To paraphrase Carl Sagan: in one unremarkable galaxy among hundreds of billions, there is an unremarkable star among hundreds of billions of stars in that one galaxy. Around that star revolves a world with life. Some people who live on that world believe they are the center of the universe.
Sagan nicely puts into perspective how absurd it is to believe, given our current knowledge of the cosmos, that we are the center of all things – either physically, at the literal center, or metaphorically, as the most important things in the universe. This is a childish view, held by our ancestors because they could not have known better. Science, as Stephen Jay Gould noted, is partly a process of smashing pillars of human narcissism. Neither the earth, nor our sun, nor our galaxy is at the center of the universe. The universe, it turns out, has no center. Nor are humans at the pinnacle of the evolutionary tree – we are just one twig, and every other twig has just as much evolutionary history behind it as we do.
Humans are certainly the most encephalized species on the planet, with by far the most advanced culture and technology, so we are special in that sense. Every time, however, scientists believe they have nailed down something that is unique about humans, some researcher finds that chimps (our closest cousins), or even other species, can do it too. We are part of the animal kingdom, part of this physical world, the result of natural processes that seem ubiquitous throughout the universe.
This view of the universe and ourselves, a view that has been hard won over centuries of ego-smashing scientific discoveries, is very different from the world view held by herders and farmers living thousands of years ago. Yet, that primitive, prescientific, egocentric, and tiny world view still holds sway over many people living today.
Take Ken Ham (please) – he recently wrote a post in which he attacks the modern scientific world view as a “desperate” and “secularist” attempt to prove evolution in order to rebel against God. It is a little window into the mind of extreme fundamentalists.
Even in his speculations about the motivations of scientists, he believes that it is all about him and his religious beliefs. He equates scientists and secularists, as if they are defined by rejecting his religious faith. He thinks it is all about rebelling against his God, rather than simply discovering the nature and state of the universe.
He also thinks we are still trying to prove evolution, when in fact evolution is already an established scientific fact.
You see, according to the secular, evolutionary worldview there must be other habited worlds out there. As the head of NASA, Charles Bolden, puts it, “It’s highly improbable in the limitless vastness of the universe that we humans stand alone.” Secularists cannot allow earth to be special or unique—that’s a biblical idea (Isaiah 45:18). If life evolved here, it simply must have evolved elsewhere they believe.
Clearly he thinks that scientists are rejecting a biblical idea, rather than accepting what the science tells us – that there is no reason to think that the earth is unique.
Reading creationists write on such topics also gives me the impression that they have never tried to wrap their minds around how truly big the universe is. As was said in the movie Contact, if this is all for us, it seems like an awful waste of space. There are hundreds of billions of galaxies each with hundreds of billions of stars, many with planets, including earth-like planets. That is just in the part of the universe we can observe, but there’s much more.
We do get a (sort of) prediction from Ham:
And I do believe there can’t be other intelligent beings in outer space because of the meaning of the gospel. You see, the Bible makes it clear that Adam’s sin affected the whole universe. This means that any aliens would also be affected by Adam’s sin, but because they are not Adam’s descendants, they can’t have salvation. One day, the whole universe will be judged by fire, and there will be a new heavens and earth. God’s Son stepped into history to be Jesus Christ, the “Godman,” to be our relative, and to be the perfect sacrifice for sin—the Savior of mankind.
This does make me wonder what will happen if we do contact intelligent aliens. What will the Ken Hams of the world believe – that they are demons, that they are not truly self-aware, that they are damned to human hell? If we one day have relations with an alien civilization, will we have to deal with fundamentalists and their bizarre beliefs that aliens can’t exist, or that God will one day wipe out their entire civilization for the salvation of earth?
When the authors of the Bible wrote their fables, they believed the earth was the entire universe, and that everything in the sky was close and revolved about the earth. Ham is trying to apply that primitive world view to the universe as we now understand it, with all its vastness. The result is beyond absurd.
Ham does not think we should be investing in looking for extraterrestrial life, writing:
I’m shocked at the countless hundreds of millions of dollars that have been spent over the years in the desperate and fruitless search for extraterrestrial life.
This is a good example of primitive and firmly held beliefs squashing curiosity and exploration.
Yesterday, July 20th, was the 45th anniversary of Apollo 11 landing on the surface of the moon, and Neil Armstrong and Buzz Aldrin becoming the first and second humans to walk on the surface of another world. This is, to be sure, one of the greatest achievements of the human species.
There are those, however, who claim that we never sent astronauts to the moon, that the entire thing was an elaborate hoax by the US, meant to intimidate our rivals with our spacefaring prowess. As is typical of most grand conspiracy theories, they have no actual evidence to support their claim. None of the many people who would have to have been involved have come forward to confess their involvement. No government documents have come to light, no secret studios have been revealed. There is no footage accidentally revealing stage equipment.
What the moon hoax theorists have is anomaly hunting. This is the process of looking for something – anything – that does not seem to fit or that defies easy explanation, and then declaring it evidence that the standard story is false. Conspiracy theorists then slip in their preferred conspiracy narrative to take its place. Sometimes they are more coy, claiming to be “just asking questions” (also known as jaqing off), but their agenda is clear.
Genuine anomalies are of significant interest to science and any investigation, no question. For an apparent anomaly to be useful, however, mundane explanations need to be vigorously ruled out (conspiracy theorists tend to skip that part). Only when genuine attempts to explain apparent anomalies have failed to provide any plausible explanation should it be considered a true anomaly deserving of attention.
At that point the answer to the anomaly is, “we currently don’t know,” not “it’s a conspiracy.”
The reason that anomalies, in and of themselves, are not very predictive of something unusual going on is that they represent one method of mining vast amounts of data looking for desired patterns. Conspiracy theorists, in essence, make the argument (or simply the implication) that where there is smoke there is fire, and then offer apparent anomalies as the smoke. This is a false premise, however. If apparent anomalies count as smoke, then there is smoke everywhere, even without fires.
In other words, any historical event is going to have countless moving parts, curious details, apparent coincidences, and complex chains of contingency. Further, people themselves often have complex motivations contingent upon the quirky details of their lives. All of this is raw material for apparent anomalies. It would be remarkable if you couldn’t find apparent anomalies when combing through the details of an historical event.
Here are some of the alleged anomalies that moon hoax conspiracy theorists have pointed out over the years. One major category is photographic. They point to the lack of stars in the moon’s sky, the visibility of astronauts with the sun behind them, and the non-parallel shadows of different objects lit by the same source.
These all derive from the fact that the moon is an unfamiliar location for photography. These apparent anomalies all have simple explanations. The stars are simply washed out. The landscape is uneven, hence the non-parallel shadows. And the moon’s surface is highly reflective, providing the fill light to make the front of astronauts visible even when the sun is behind them.
Another unfamiliar property of the moon is the lack of atmosphere, allowing the flag to flap for a long time once moved by an astronaut. Without air to dampen the oscillations, they continue.
One more technical point often raised is the claim that the astronauts would have been killed by radiation from the Van Allen belts and cosmic rays. This is simply untrue, however. The Apollo 11 astronauts received a total of about 11 millisieverts of radiation, while a lethal dose is about 8,000 millisieverts (or 8 sieverts). NASA’s limit for lifetime exposure to radiation is 1 sievert, about what astronauts would get on a one-way trip to Mars. What saved the Apollo astronauts was the brief overall time they were exposed to radiation – the longest Apollo missions lasted only 12 days.
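Using the dose figures quoted above (rough values as cited here, not precise mission dosimetry), a quick back-of-the-envelope comparison makes the margin clear:

```python
# Compare the quoted Apollo 11 radiation dose to reference doses.
# All figures are the rough values cited in the text, in millisieverts.
apollo11_dose_msv = 11          # total mission dose for Apollo 11
lethal_dose_msv = 8_000         # approximate acutely lethal dose (8 Sv)
nasa_career_limit_msv = 1_000   # NASA lifetime exposure limit (1 Sv)

print(f"Fraction of a lethal dose:    {apollo11_dose_msv / lethal_dose_msv:.4%}")
print(f"Fraction of the career limit: {apollo11_dose_msv / nasa_career_limit_msv:.1%}")
```

The astronauts received roughly a tenth of one percent of a lethal dose – nowhere near dangerous territory.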
Beyond the lack of evidence for a conspiracy, and the non-anomaly anomalies, there is a huge plausibility problem with the moon hoax conspiracy. Why haven’t other countries, like Russia, ever come forward with evidence that their tracking does not support NASA’s story? Where did all the moon rocks come from? (And don’t say meteorites – those would look different due to their travel through the atmosphere, and we would never have found so many from the moon.)
It’s unlikely the US could have pulled off the hoax, and some have argued it would have actually been easier to just send astronauts to the moon.
There is also undeniable evidence that there are human artifacts on the moon. Anyone with the equipment and knowledge can fire up a laser and bounce the beam off a corner reflector left on the surface of the moon.
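A rough calculation shows why this experiment is such clean evidence: the round-trip light time to the reflectors is long enough to measure unambiguously. (The mean Earth–Moon distance below is an assumed average; the true distance varies over the orbit.)

```python
# Round-trip travel time for a laser pulse bounced off a lunar retroreflector.
MEAN_MOON_DISTANCE_M = 384_400_000  # assumed mean Earth-Moon distance, meters
SPEED_OF_LIGHT_M_S = 299_792_458    # exact, by definition of the meter

round_trip_s = 2 * MEAN_MOON_DISTANCE_M / SPEED_OF_LIGHT_M_S
print(f"Round-trip time: {round_trip_s:.2f} s")  # about 2.56 seconds
```

That two-and-a-half-second delay is exactly what lunar laser ranging stations observe, and it only works because a reflector is physically sitting on the lunar surface.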
Conspiracy theorists have long asked: if we went to the moon, why are there no pictures of the landing sites from telescopes? Well, Earth-based telescopes do not have the resolving power, but lunar probes do. The Lunar Reconnaissance Orbiter has taken pictures of multiple Apollo landing sites, showing the equipment left behind and the tracks made by the astronauts.
The evidence is overwhelming and undeniable that NASA sent multiple missions to the moon, leaving behind footprints and equipment and bringing back moon rocks and history.
Conspiracy theorists, however, deny the undeniable with flimsy and easily refutable claims of alleged anomalies. They have no evidence to support their theory, and cannot put forward even a coherent narrative. They do tell us something about the human capacity for motivated reasoning and self-deception.
In a way, therefore, the Apollo missions (with their attendant conspiracy theories) represent the best and worst aspects of human potential.
The Guardian’s headline reads: Clear differences between organic and non-organic food, study finds. While this article was better than most in including some caveats, it was clearly favorable to the conclusions in the study, and failed, in my opinion, to properly put the new study into an informative context.
How does this new study add to the literature looking at the safety and health effect of organic produce vs conventional produce?
First, the study is a meta-analysis of 343 prior studies looking at the nutrient content, pesticide residue, and heavy metal contamination of produce. It is not a collection of any new data. A meta-analysis is very tricky to conduct well – it does not improve the quality of the data going into the analysis, only the statistical power. Further, it introduces another layer of potential bias (another researcher degree of freedom) in the choice of which studies are included in the analysis.
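The point about statistical power versus data quality can be made concrete with a minimal fixed-effect pooling sketch (the standard inverse-variance method; the study numbers below are hypothetical):

```python
# Fixed-effect (inverse-variance) meta-analysis sketch: pooling shrinks the
# standard error (more statistical power), but the pooled estimate simply
# inherits whatever bias the individual studies carry.
def pool_fixed_effect(estimates, std_errors):
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: effect estimates with their standard errors.
estimate, se = pool_fixed_effect([0.30, 0.10, 0.25], [0.15, 0.10, 0.20])
print(f"pooled estimate = {estimate:.3f}, pooled SE = {se:.3f}")
```

The pooled standard error comes out smaller than that of any individual study – that is the extra power. But if the inputs are biased low-quality studies, the pooled estimate just averages that bias; nothing in the arithmetic corrects it.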
This study used very open criteria, and therefore included more lower-quality studies (likely to be false positive or show the bias of the researchers) than other meta-analyses.
Whenever I am trying to quickly grasp the bottom line of any scientific question, I look for a consensus among several independent systematic reviews. If multiple reviewers are looking at the same body of research and coming to the same conclusion, that conclusion is likely reliable.
In this case, there are three other recent large systematic reviews and meta-analyses of the same research on the nutritional content and safety of organic vs conventional produce. All three came to the opposite conclusion from the current study. A 2009 review by Dangour et al. concluded:
“On the basis of a systematic review of studies of satisfactory quality, there is no evidence of a difference in nutrient quality between organically and conventionally produced foodstuffs. The small differences in nutrient content detected are biologically plausible and mostly relate to differences in production methods.”
“From a systematic review of the currently available published literature, evidence is lacking for nutrition-related health effects that result from the consumption of organically produced foodstuffs.”
“After analyzing the data, the researchers found little significant difference in health benefits between organic and conventional foods. No consistent differences were seen in the vitamin content of organic products, and only one nutrient — phosphorus — was significantly higher in organic versus conventionally grown produce (and the researchers note that because few people have phosphorous deficiency, this has little clinical significance).”
Older reviews by other researchers came to the same negative conclusions. This latest review, therefore, is the outlier. While I don’t think that possible conflicts of interest are definitive in analyzing research, it is informative, especially when there are disagreements. The negative reviews of the data were independently funded. This latest study was partially funded by the Sheepdrove Trust, which funds research supportive of organic farming. If, for example, the only study out of several that was favorable to a particular drug was funded by the drug manufacturer, that would be significant.
It is likely that the inclusion of weaker studies biased the results of this latest analysis in favor of a false positive. The authors claim that they did a sensitivity analysis and that removing the weaker studies did not significantly change the results, but I find this unlikely, as removing the weaker studies would essentially make the current analysis more similar to the other analyses, which found no significant differences.
The reporting about this study, based on how it was presented, is also misleading in other respects. The claim that antioxidants have proven health benefits, for example, is an overstatement. The effect of antioxidants on health is more complex, and I don’t think it’s fair to conclude that higher antioxidant levels are clearly beneficial.
The study also found that organic produce has lower levels of protein, fiber, and nitrates, but these findings were strangely missing from the abstract’s conclusions and from uncritical reporting about the study.
Regarding pesticides, there is agreement that the data shows higher residue levels of those pesticides tested in conventional produce. However, these levels are well below safety limits, so this likely has no health effect. Organic proponents argue that the cumulative effect is what matters, even if individual levels are safe, but this is pure speculation.
Further – these results are rigged and misleading. The studies only test for conventional pesticides, so of course there are more of these on conventional produce. They don’t test for organic pesticides – those used in organic farming. There is absolutely no reason to conclude that organic pesticides are safer than synthetic pesticides; their safety is assumed by organic proponents because they are “natural,” a clear example of the naturalistic fallacy.
The cadmium issue is similar in that, while levels were higher in conventional produce, the levels are still well below safety limits. Again organic proponents argue that the levels accumulate, but they have no evidence for this.
Three recent systematic reviews of hundreds of studies concluded that there is no significant difference in nutrient content between organic and conventional produce. Now one meta-analysis concludes that there are higher levels of antioxidants (and lower levels of protein and fiber). The simplest explanation for the discrepancy is that the independent reviews were of better quality, being more exclusive of lower-quality studies.
The pesticide issue is fearmongering, in my opinion, as the levels in conventional produce are well below safety limits. The analysis also ignores organic pesticides, without any justification, in my opinion.
Even if you believe that there are differences in organic vs conventional produce, it is not clear that any particular method of organic farming is the cause. There are many differences in practice, and combining them together philosophically into “organic” farming is misleading and counterproductive. Further, it is possible that the overall smaller size of organic produce simply concentrates some nutrients, but this does not mean the consumer is getting overall more nutrients.
I also agree with Richard Mithen, leader of the food and health program at the Institute of Food Research, who is quoted as saying:
“The additional cost of organic vegetables to the consumer and the likely reduced consumption would easily offset any marginal increase in nutritional properties, even if they did occur, which I doubt,” Mithen said. “To improve public health we need to encourage people to eat more fruit and vegetables, regardless of how they are produced.”
The higher cost of organic produce, due to the lower productivity of organic farming and the premium created by the marketing hype of organic food, could potentially lead to overall reduced consumption of fruits and vegetables, and therefore, ironically, be a net negative for nutrition.
Just eat plenty of fruits and vegetables, and if you’re worried about the tiny residue of pesticides, wash them before eating (whether organic or conventional).
In 2013 the European Commission awarded $1.3 billion to a project to simulate the human brain in a supercomputer. While everyone is excited about the prospect and welcomes the infusion of cash, the project has recently come under public criticism.
More than 180 neuroscientists signed an open letter criticizing the way the project is being managed. The letter states:
“We believe the HBP is not a well-conceived or implemented project and that it is ill suited to be the centerpiece of European neuroscience.”
There appear to be two main points to the criticism – the first is that the money is largely going to computer scientists to create the software that will simulate the human brain. The neuroscientists complain that while the project is being sold to the public as a neuroscience project, in reality it is an IT project.
The problem here, they argue, is that we have not yet sufficiently mapped the connections within the brain to model them. The project, therefore, as currently constructed, is premature. This is certain to lead to failure and create bad press for European neuroscience.
They argue that the project should be funding the neuroscience that will create the data that the programmers will need for their simulation. Defenders argue that the neuroscience is already happening. The project was always meant to be an IT project to simulate the brain, not fund basic neuroscience.
The neuroscientists may have a point, in my opinion. Providing a large amount of funding for research can be tricky business. The potential problem is that the funding, if it is targeted, is like putting a giant thumb on the scale. Researchers need resources to conduct research, and therefore they will follow the funding. In other words, instead of researchers spending their time and effort on projects they find interesting and plausible, they will spend their time and effort on projects that are funded, even if they are not ideal.
By funding a specific goal prematurely, especially with such a large amount of cash, the project can create a massive waste of time and energy.
We often see this in medicine, where grassroots or patient groups raise money to fund research to find a cure. This could prematurely shift researchers into researching treatments, rather than asking more basic questions, and shift the balance away from what is optimal. It actually might slow progress toward a cure. The eventual failure and disappointment is also a PR disaster.
The second criticism comes from cognitive neuroscientists. The issue here is whether the brain project should approach the simulation problem more from the bottom-up or the top-down. In other words, should they focus on individual neurons and basic brain architecture (bottom-up), or should they focus on how the brain works and is connected in relation to higher cognitive functions (top-down)? This is a bit of a false dichotomy, but the question is a matter of emphasis.
The cognitive neuroscientists feel they are being left out in the cold and their contributions to the project are being neglected.
With 1.3 billion dollars on the table a cynical person might view this all as a fight over funding. Everyone thinks their contribution to the project deserves a larger slice of the pie. But even if you endorse this view, it is a fight worth having. Let everyone make the best case they can for their view and allocate resources accordingly.
This gets to the heart of the criticism, that the project manager, Henry Markram, is not doing that (or at least not well). The letter calls for a more transparent and open process of setting priorities for the project.
The leaders of the project have also recently issued a public response. They begin:
The members of the HBP are saddened by the open letter posted on neurofuture.eu on 7 July 2014, as we feel that it divides rather than unifies our efforts to understand the brain. However, we recognize that the signatories have important concerns about the project. Here we try to clarify some of the main issues they touch on. We also invite the signatories to discuss their concerns in a direct scientific exchange with scientists leading the HBP and its subprojects.
They also emphasize that in January 2015 the project will undergo its next annual review. This review is likely to get a lot of attention.
The Human Brain Project is certainly very exciting. The recent hubbub is unremarkable in that such conflicts over funding and priority are common, it is simply magnified by the amount of funding of one specific project. The discussion seems healthy, however, addressing reasonable questions and concerns, and calling for transparency and dialogue.
I would not minimize it as “just” a fight over funding – questions of funding are largely scientific questions.
It is also worth pointing out that the overall vision of the project is a good one. The next 50 years or so are likely to be a time of remarkable progress toward this very goal – simulating the human brain in a computer. It seems to me that the effort to map and “reverse engineer” the brain is working symbiotically with the effort to simulate the brain. Each project informs the other.
Of course we need knowledge of the brain in order to simulate it, so the IT end needs the neuroscience. But also, simulating the brain, or even specific components of the brain, is a great way to ask questions about how the brain works.
In the end, it’s all good.
Alternative medicine’s best friend – and, in my opinion, largely responsible for what popularity it has – is a gullible media. I had thought we were turning a corner, that the press had moved past the gushing, maximally clueless approach to CAM and was starting to at least ask some probing questions (like, you know, does it actually work?), but a 2006 BBC documentary inspires a more pessimistic view.
The documentary is part of a BBC series hosted by Kathy Sykes: Alternative Medicine, The Evidence. This episode is on acupuncture. The episode is from 2006, but was just posted on YouTube as a “2014 documentary.” Unfortunately, old news frequently has a second life on social media.
First, let me point out that Sykes is a scientist (a fact she quickly points out). She is a physicist, which means that she has the credibility of being able to say she is a scientist but absolutely no medical training. It’s the worst-case scenario – she brings the credibility of being a scientist, and probably thinks that her background prepares her to make her own judgments about the evidence, and yet she clearly should have relied more on real experts.
She does interview Edzard Ernst in the documentary, but he mainly offers generic statements about science rather than a thorough analysis of specific claims. I wonder what gems from him were left on the cutting room floor.
The documentary does get better in the second half, as she starts to mention things like placebo effects, and the problems with the evidence-base for acupuncture. But she follows a disappointing format – setting up a scientific premise, then focusing on the positive evidence. There is a clear narrative throughout, that acupuncture is amazing and surprising.
A few examples illustrate my point. She showcases a patient in China having open heart surgery without general anesthesia, but with acupuncture “instead.” The framing of the case is massively biased to exaggerate the role of acupuncture. Then, tucked into the reporting, she mentions that the patient had sedation and local anesthesia (her chest was numbed), as if this is a tiny detail. There is no mention of whether or not you could have the same procedure with conscious sedation and local anesthesia but without the acupuncture.
In the end she perpetuated the myth of acupuncture anesthesia without putting the case into any perspective.
The worst part of the documentary, however, was when she came to evaluating the clinical evidence for acupuncture. After gushing over all the usual nonsense about chi, life force, holistic Eastern medicine, and setting up the audience for how magically wonderful acupuncture is, she then gives us the “but I’m a scientist” bit. Here is where her failure is greatest.
She frames her approach to the evidence as – well, most studies are small, disappointing, and negative, finally a large well-controlled trial was done on migraine, and this study showed that acupuncture works. She then repeats that process with acupuncture for knee osteoarthritis, relying on the Berman trial as if one trial can be definitive.
In other words, the approach she takes is to rely on a single trial, presenting the trial as if it is definitive (finally answering the question), and then concludes that we can “safely say” that acupuncture works for these indications.
This is profoundly wrong. We always need multiple trials with a consistent effect, as revealed by systematic reviews. For example, the American Academy of Orthopaedic Surgeons reviewed the evidence and made a strong recommendation against acupuncture:
There were five high- and five moderate-strength studies that compared acupuncture to comparison groups receiving non-intervention sham, usual care, or education. The five moderate-strength studies were included because they reported outcomes that were different than the high-strength evidence. High-strength studies included: Berman et al., Suarez-Almazor et al., Weiner et al., Williamson et al., and Taechaarpornkul et al. Moderate-strength studies included: Sandgee et al., Vas et al., Witt et al., and Berman et al.
The majority of studies were not statistically significant and an even larger proportion of the evidence was not clinically significant. Some outcomes were associated with clinical- but not statistical- significance. The strength of this recommendation was based on lack of efficacy, not on potential harm.
This is in line with other reviews – overall the effects are not statistically significant, and in those studies that find an effect it is generally not clinically significant (meaning that it is likely background noise and placebo effects).
The same is true for migraine. Reviews of the evidence are consistent with placebo effects only.
After mangling the clinical evidence, Sykes then gives a credulous review of the fMRI evidence. Gee – when you stick needles into the skin, stuff happens in the brain. There is no mention of how tricky it is to perform and to interpret such studies. There is no mention of anomaly hunting, or the need to confirm the results, and find out what they actually mean.
An article in The Guardian by Simon Singh nicely attacks this stunt.
To the average viewer, they will see – wow, science shows that acupuncture has a real effect on the brain, and something to do with pain.
Sykes’ BBC review of acupuncture was an unmitigated failure. Throughout she follows a clear narrative – as a scientist, she was initially skeptical, but then was surprised to find that there really is something to acupuncture. She fails to put any of the evidence she presents into perspective, fails to give a real skeptical view, and fails to even mention systematic reviews.
In the end she comes to a conclusion that, in my opinion, is the opposite of what science and the evidence say. She failed as both a scientist and a journalist, and did a disservice to anyone watching her documentary.
The series did receive significant criticism at the time – like this article from Ben Goldacre – and now that it has been posted on YouTube as if it were new, I guess we need a new round of criticism as well.
Part of the mission of SBM is to continually prod discussion and examination of the relationship between science and medicine, with special attention on those beliefs and movements within medicine that we feel run counter to science and good medical practice. Chief among them is so-called complementary and alternative medicine (CAM) – although proponents are constantly tweaking the branding, for convenience I will simply refer to it as CAM.
Within academia I have found that CAM is promoted largely below the radar, with the deliberate absence of public debate and discussion. I have been told this directly, and that the reason is to avoid controversy. This stance assumes that CAM is a good thing and that any controversy would be unjustified, perhaps the result of bigotry rather than reason. It’s sad to see how successful this campaign has been, even among my fellow academics and scientists who should know better.
The reality is that CAM is fatally flawed in both philosophy and practice, and the claims of CAM proponents wither under direct light. I take some small solace in the observation that CAM is starting to be the victim of its own success – growing awareness of CAM is shedding some inevitable light on what it actually is. Further, because CAM proponents are constantly trying to bend and even break the rules of science, this forces a close examination of what those rules should actually be, how they work, and their strengths and weaknesses.
This brings me to the specific topic of this article – the dreaded p-value. The p-value is a frequentist statistical measure of the data of a study. Unfortunately it has come to be treated (by non-statisticians) as the one measure of whether or not the phenomenon being studied is likely to be real, even though that is not what it is, nor what it was ever meant to be.
As an aside, this trend was likely driven by the need for simplicity. People want there to be one simple bottom line to a study, so they treat the p-value that way. It’s like evaluating the power of a computer system solely by its clock speed, one number, rather than considering all the components.
A recent paper by Pandolfi and Carreras nicely deconstructs the myth of the p-value and shows how p-value abuse is especially problematic within the world of CAM (thanks to Mark Crislip for bringing this paper to my attention) – “The faulty statistics of complementary alternative medicine (CAM)”.
For background, the p-value is the probability that the data in an experiment would demonstrate as much or more of a difference between the intervention and control groups given the null hypothesis. In clinical studies we can rephrase this as: what is the probability that the treatment group would differ from the control group as much as observed, or more, assuming the treatment has no actual effect? Many people, however, misinterpret the p-value as answering a different question: what is the probability that the treatment works?
Pandolfi and Carreras correctly point out that this is committing a formal logical fallacy, the fallacy of the transposed conditional. To illustrate this they give an excellent example. The probability of having red spots in a patient with measles is not the same as the probability of measles in someone who has red spots.
In other words, the p-value tells us the probability of the data given the null hypothesis, but what we really want to know is the probability of the hypothesis given the data. We can’t reverse the logic of p-values simply because we want to.
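To make the distinction concrete, here is a toy permutation test in Python (the outcome numbers are invented purely for illustration). The p-value it computes is just the fraction of random relabelings of the pooled data that produce a group difference at least as large as the one observed – the probability of data this extreme given the null hypothesis, and nothing about the probability that the treatment works.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

# Invented outcome scores for a tiny hypothetical two-arm trial
treatment = [5.1, 6.0, 5.8, 6.3, 5.5]
control = [4.9, 5.2, 5.0, 5.6, 5.1]
observed = abs(mean(treatment) - mean(control))

# Under the null hypothesis the group labels are interchangeable, so
# shuffle the labels many times and count how often the relabeled
# groups differ by at least as much as the real ones did.
pooled = treatment + control
random.seed(0)
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    if abs(mean(pooled[:5]) - mean(pooled[5:])) >= observed:
        extreme += 1

p_value = extreme / n_perm
print(f"p = {p_value:.3f}")  # P(difference this large | null) - nothing more
```

Nothing in this calculation ever consults how plausible the treatment was in the first place, which is exactly why the p-value alone cannot tell you the probability that the hypothesis is true.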
No worries – Bayes’ theorem comes to the rescue. This is precisely why we at SBM have largely advocated taking a Bayesian approach to scientific questions. Ironically, a Bayesian approach is how people generally operate anyway. We have prior beliefs about the world, and we update those beliefs as new information comes in (unless, of course, we have an emotional attachment to those prior beliefs, but that’s another article).
In science the logic of a Bayesian analysis is essentially this: establish a prior probability for the hypothesis, then look at the new data and calculate how that new data affects the prior probability, giving a posterior probability.
Pandolfi and Carreras point out that, ironically, this is how doctors function in everyday clinical thinking. When we see a patient we determine the differential diagnosis, a list of possible diagnoses from most likely to least likely. When we order a diagnostic test for a specific diagnosis on the list, we first consider the pre-test probability of the diagnosis. This is based upon the prevalence of the disease and how closely the patient matches the demographics, signs, and symptoms of that disease. We then apply a diagnostic test that has a certain specificity and sensitivity, and based on the results we determine the posterior probability of the diagnosis.
Therefore, the pre-test probability is essential to determining the likelihood that a diagnostic test is either a false positive vs a true positive, or a false negative vs a true negative. You can’t properly interpret the results of the test without knowing the pre-test probability.
Ironically, this logic is abandoned when evaluating scientific research. In fact, the main flaw in the way evidence-based medicine is applied is that it ignores the pre-test probability, and relies heavily on an indirect measure (the p-value) in isolation to interpret test results. If applied to clinical medicine, such a process would constitute gross malpractice.
To drive this point home a little further, using a p-value in isolation in a clinical study to determine if the phenomenon under study is real is like using a non-specific diagnostic test to determine that a patient has a very rare disease, ignoring predictive value and the possibility of a false positive test. As experienced clinicians understand, if a disease is truly rare, then even a reasonably specific test is far more likely to generate a false positive than a true positive.
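To put numbers on this, here is a minimal sketch of the predictive-value arithmetic. The prevalence, sensitivity, and specificity below are assumed values chosen for illustration: a 1-in-1,000 disease and a test that sounds quite good (99% sensitive, 95% specific).

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """Bayes' theorem for a binary diagnostic test: the probability of
    disease given a positive result (the positive predictive value)."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: a rare disease, a reasonably specific test
ppv = posterior_given_positive(prior=0.001, sensitivity=0.99, specificity=0.95)
print(f"P(disease | positive test) = {ppv:.1%}")  # about 2%
```

Despite the test’s impressive-sounding characteristics, a positive result here leaves the patient with only about a 2% chance of actually having the disease, because false positives from the 999-in-1,000 healthy majority swamp the true positives.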
The analogy here is this – when studying a phenomenon that is unlikely, a significant p-value is far more likely to be a false positive than a true positive. This is why p-values are especially problematic when applied to CAM.
CAM modalities are alternative largely because they did not emerge from mainstream scientific thinking. In many cases, the claims made are incompatible with modern science. Homeopathy, for example, would require rewriting the physics, chemistry, physiology, and biology textbooks to a significant degree. A claim that apparently violates basic laws of science is, at the very least, the equivalent of a rare disease – it has a low prior probability. Therefore, even with an impressive-looking p-value of 0.01, the probability could still be overwhelming that the phenomenon being tested is not real and the outcome is a false positive.

Conclusion
The Pandolfi and Carreras paper nicely illustrates one of the core principles of science-based medicine – putting the science back into medicine. Evidence is not enough; we also have to put that evidence into the context of our basic scientific understanding of the world, expressed as a prior probability. It may not be possible to have a rigorous quantitative expression of that prior probability, but we can at least use representative figures.
For example, Pandolfi and Carreras use prior odds of 9:1 against to represent the skeptical position. This is being generous, in my opinion, as I would give odds of at least 999:1 against for claims such as homeopathy. But even using the conservative odds of 9:1, a p-value of 0.01 does not favor the phenomenon being real, but rather the null hypothesis.
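For readers who want to check the arithmetic, here is one way to reproduce that conclusion. The calibration used below (the Sellke–Bayarri–Berger bound, −e·p·ln p) is my own choice for this sketch, not necessarily the authors’ exact method; it yields the most generous Bayes factor a given p-value can possibly support, which only strengthens the point.

```python
import math

def max_bayes_factor(p):
    """Upper bound on the evidence a p-value can provide against the null
    (Sellke-Bayarri-Berger calibration; valid for p < 1/e)."""
    return 1 / (-math.e * p * math.log(p))

prior_odds = 1 / 9   # 9:1 against the phenomenon being real
p = 0.01
posterior_odds = prior_odds * max_bayes_factor(p)
print(f"posterior odds = {posterior_odds:.2f}")  # about 0.89 - still favors the null
```

Even granting the p = 0.01 result its maximum possible evidential weight, the posterior odds remain below 1 – that is, the null hypothesis is still the better bet.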
The two take-home messages here are these: First, don’t rely on p-values as the sole measure of a study’s outcome; favor, rather, a Bayesian analysis. Even in the absence of a formal Bayesian analysis, an informal Bayesian approach will help put the study results into context.
Second – we should probably raise the bar for statistical significance. A p-value of 0.05 is not as impressive as most people might think. Mark suggested we set the bar at 0.001 as a first approximation, and then adjust downward from there based upon prior probability.
Doing this will work massively against the interests of CAM, because of the low prior probability of its claims. But it is in the interests of good medicine.
While cleaning out some old files, I was delighted to find an article I had clipped and saved 35 years ago: a “Sounding Boards” article from the January 25, 1979 issue of The New England Journal of Medicine. It was written by Joseph E. Hardison, MD, from the Emory University School of Medicine; it addresses the reasons doctors order unnecessary tests, and its title is “To Be Complete.” Today we have many more tests that can be ordered inappropriately, so the article is even more pertinent and deserves to be recycled. He says,
When challenged and asked to defend their reasons for ordering or performing unnecessary tests and procedures, the reasons given usually fall under one of the following excuses…
He lists ten excuses:
I can think of two more excuses:
Too many tests can be hazardous to your health for several reasons:
The need for a test can be informed by scientific studies. Does routinely ordering x-rays on all patients with ankle injuries improve outcomes? No, it doesn’t. Simple sprains are much more common than fractures, and x-rays expose patients to radiation. Science-based guidelines like the Ottawa ankle rules have been developed to help clinicians decide when to order tests.
Another consideration is “what difference will the test make?” What are we going to do differently if the result is x rather than y? If we can’t answer that question, we probably shouldn’t be doing the test. That’s particularly pertinent in genomic testing, where patients may be told they are at high risk of developing a disease that they can’t do anything to prevent.
And then there’s CAM. Many tests offered by CAM practitioners have not been validated, some are known to be bogus, and some are used to diagnose bogus diseases.
And then there are the patients who demand tests because of something they read on the Internet.

Conclusion
Every year there are more tests available for doctors to order. Doctors should not order any of them without good reason. Doctors should be guided by good judgment grounded in science. Patients should not be hesitant to question their doctors if they don’t understand why a test is being done or what difference the results will make.
It’s been a while since I wrote a substantive post for this blog about the Houston cancer doctor and Polish expat Stanislaw Burzynski who claims to have a fantastic treatment for cancer that blows away conventional treatment for cancers that are currently incurable. The time has come—and not for good reasons. The last time was primarily just a post announcing my article about Burzynski being published in Skeptical Inquirer. When last we saw Stanislaw Burzynski on this blog, it was a post that I hated to write, in which I noted that the Food and Drug Administration (FDA) had caved to patient and legislator pressure and allowed compassionate use exemptions (otherwise known as single patient INDs) to continue. The catch? Cynically, the FDA put a condition on its decision, specifically that no doctor associated with Burzynski nor Burzynski himself could administer the antineoplastons. This set off a mad scramble among Burzynski patients wanting ANPs to find a doctor willing to do all the paperwork and deal with Burzynski to administer ANPs. The family of one patient, McKenzie Lowe, managed to succeed.
It’s hard for me to believe that it’s been almost three years since I first started taking an interest in Burzynski. Three long years, but that’s less than one-twelfth the time that Burzynski has actually been administering to patients an unproven cancer treatment known as antineoplastons (ANPs), a drug that has never been FDA-approved, which he began doing in 1977. Yes, back when Burzynski got started administering ANPs to patients, I was just entering high school, the Internet as we know it did not exist yet (just a much smaller precursor), and disco ruled the music charts. It’s even harder for me to believe, given the way that Burzynski abuses clinical trial ethics and science, that I hadn’t paid attention to him much earlier in my blogging career. After all, I’m a cancer surgeon, and here’s been this guy treating patients with advanced brain cancers using peptides that, according to Burzynski, do so much better against what are now incurable tumors than standard of care, all while charging huge sums of money to patients on “clinical trials.” It might be a cliché to quote the Dead this way, but what a long, strange trip it’s been. Because there has been a major development in this saga whose context you need to know in order to understand it, I’m going to do a brief recap. Long-time regulars, feel free to skip the next couple of paragraphs, as they just bring people up to date and include a lot of links for background; or, if you haven’t already, read this summary of Burzynski’s history published earlier this year in Skeptical Inquirer. Newbies, listen up. Read the next two paragraphs. You need to know this to understand why I’m so unhappy.
Incidents that I never blogged about here (but fortunately a certain “friend” of mine did on another blog) that have occurred since the FDA caved and (sort of) lifted the partial clinical hold on ANPs included:
Meanwhile, since he received that warning letter from the FDA late last year, in addition to making unconvincing claims excusing the issues described in the FDA warning letter, he’s been trying to enlist patients, both cute children and the semi-famous, to persuade legislators to pressure the FDA to let his clinical trials open again after they had been put on a partial clinical hold in the wake of the death of a child from hypernatremia (too much sodium in the blood), as reported by Liz Szabo in USA TODAY. All the while Burzynski has continued to charge patients large sums of money, while bragging that he doesn’t charge for the actual ANPs. For those unfamiliar with the story, a partial clinical hold means that Stanislaw Burzynski can’t enroll any new patients in his clinical trials but can continue to treat patients already enrolled.
Yes, three months ago—has it been that long already?—I noted with dismay that, under the onslaught of sympathetic cancer patients who understandably but incorrectly believe that Burzynski is their last chance to live, the FDA did indeed cave, although in the most weaselly way imaginable, stating that Stanislaw Burzynski could again enroll patients under compassionate use exemptions (also known as single patient INDs), even though the FDA warning letter had found gross deficiencies in the Burzynski Research Institute Institutional Review Board (BRI-IRB), which is run by an old crony of Burzynski’s who just so happens to be chair of the board of directors of the BRI and was found to be negligent in protecting patients by playing fast and loose with the regulations for enrolling patients on single patient INDs. The condition was that Burzynski himself or anyone working for him couldn’t be the physician treating the patients, leaving the patients to find a doctor willing to oversee the administration of ANPs.
And so it was for three months, with desperate patients with terminal brain tumors scrambling to find some doctor willing to do the paperwork to get the FDA to allow him to administer ANPs and also willing to work with such a disreputable character. McKenzie Lowe’s family, for instance, managed to find a retired family practitioner named Terry Bennett to agree to this. (More on this later.) Then, last week, this bombshell landed, courtesy of Liz Szabo again, in a story entitled “FDA gives controversial doc green light to restart work“:
The Food and Drug Administration has given a controversial Houston doctor the green light to resume administering experimental cancer treatments.
The FDA has lifted restrictions on a clinical trial run by Stanislaw Burzynski, who was the subject of a USA TODAY investigation last year. Burzynski, 70, has wrangled with state and federal medical authorities for nearly 40 years over his claims that he has discovered natural substances that can fight certain cancers.
You might think that my reaction upon reading this would be “WTF?” It wasn’t. However, that was only because I had had that reaction a week ago, when I read a press release from the BRI, “Burzynski Research Institute, Inc. Announces Lifting Of The FDA Partial Clinical Hold – Phase 3 Clinical Study Agreed Upon“:
The Burzynski Research Institute, Inc. (BRI) announced today that U.S. Food and Drug Administration (FDA) has notified the company that its partial clinical hold on its IND for Antineoplastons A10/AS2-1 Injections has been lifted. The FDA has determined that under its IND the Company may initiate its planned Phase 3 study in newly diagnosed diffuse, intrinsic, brainstem glioma. The Company is continuing discussions with the Agency in an effort to finalize additional details of the phase 3 study protocol for the potential clinical trial.
The FDA’s decision to lift the clinical hold marks an important step in the development of Antineoplastons for the treatment of various forms of brain tumors in the US. At the same time, the Company is evaluating possible next steps for the Antineoplastons clinical program given the current progress and anticipated resource requirements of the ongoing program.
That was when I had my “WTF?” moment. However, noting the obviously intentionally vague language of the press release, I decided to wait until I could learn more and obtain confirmation rather than blog the press release itself. For one thing, Burzynski could have just been spinning furiously, and there could have been a lot less to this press release than met the eye. It wouldn’t be the first time. Also, I just couldn’t believe that the FDA would so horrifically fail patients with brain cancer yet again, the way it now has. I could sort of see why the FDA issued its previous ruling that allowed other doctors to enroll patients in single patient INDs of ANPs. At the time, the FDA was under a lot of pressure from legislators being contacted by constituents about families in their state or district with brain cancer who wanted to be treated by Burzynski. Its cynical solution must have seemed downright Solomonic at the time to the administrators who thought of it. But this? There’s nothing in the press release that says that the conditions that led to the partial clinical hold were resolved. Yesterday, however, there was this in Szabo’s story:
In a statement issued Wednesday, the FDA confirmed that it has lifted its restrictions on Burzynski because he answered all of its questions. In particular, Burzynski addressed “common and serious (and in some cases fatal) adverse drug reactions, as well as accurate information on how often tumors shrink after treatment with antineoplastons.”
Notice something missing? I did. The FDA said nothing about the BRI-IRB, which was soundly chastised for approving single patient INDs without full meetings of the committee. It also said nothing about the massive conflicts of interest that exist in the IRB and how it can’t possibly be independent. Sure, if Burzynski talked the talk adequately, I might see how the FDA might be either snowed or too tired to fight any more. I could see how it might be tempted to let him open his bogus clinical trials again—but only if Burzynski were forced to use a truly independent IRB, not his crony-packed IRB that basically rubber stamps whatever it is he wants to do with no questions and no evident effort to protect the welfare and rights of clinical trial subjects. Any IRB worth its salt will refuse to approve a clinical trial now and then and/or issue warnings to principal investigators for inadequate documentation, too many adverse events, etc. Has the BRI-IRB ever done this? Not as far as anyone I know can tell. Of course, it’s not really up to the FDA to oversee the function of IRBs. Rather, it’s more a function of the Office of Human Research Protections (OHRP). Long have I wondered: Where the heck has the OHRP been all these years?
It’s not as though Burzynski isn’t up to his old tricks again, either. Even with another physician overseeing the treatment of McKenzie Lowe, he’s managing to find ways to charge patients huge sums of money, even as he isn’t charging them for the ANPs. Indeed, the other day, Dr. Bennett, the physician who is overseeing the treatment of McKenzie Lowe whom I mentioned near the beginning of this article, was featured in an article in a local newspaper “Dr. feels misled in cancer treatment costs“. The money quotes are here:
But there was something Bennett didn’t know.
Bennett’s decision was based, in part, on a newspaper article that said Burzynski had agreed to donate the medicine required for McKenzie’s treatment. But what Bennett didn’t know is that Burzynki [sic] planned to charge the family for the clinical costs associated with the therapy.
LaFountain said the first month’s bill is expected to be $28,000. Every month after that is expected to cost $16,000. The treatment usually lasts eight to 12 months.
And health insurance won’t cover a dime of it.
Bennett says a representative of the Burzynski Clinic called him on that date seeking payment for the first month of McKenzie’s therapy. Prior to that, Bennett, who is donating his services, thought Burzynski was doing the same.
Instead, said Bennett, “I’m supposed to be the bag man for all of this. They want me to collect the 30 grand for the family and send it to Burzynski.”
Elsewhere, Bennett said, “It [the Burzynski Clinic] meets all the criteria for a bait and switch operation.” Yes, even Dr. Bennett found out the hard way how the Burzynski Clinic operates. None of this is anything new. Burzynski has been deflecting this criticism for years by saying that he doesn’t charge patients for his actual drug, the ANPs, giving the impression that he’s not charging them much of anything. However, the reality is that costs can rapidly add up to hundreds of thousands of dollars, which is why fundraisers by families of Burzynski patients have been a feature surrounding the Burzynski Clinic operation for decades, as has been documented time and time and time again.
This is a doctor who wants to help and is willing to take risks, even if inappropriately in this case, and he feels used by Burzynski. Indeed, his comparing himself to a “bag man” is a particularly apt metaphor, because that’s what he is in this: A bag man. It’s his job to collect the cash from the family of a dying child and ship it to Burzynski. Ironically, this news story appeared on the very same day as Burzynski’s press release, and, of course, Szabo’s story appeared yesterday. Did the FDA know Burzynski was doing this? It strains credulity to think that the FDA didn’t know about this abuse of a desperately ill child’s family, given that information regarding how it will be paid for is part of a single patient IND application, indeed part of all clinical trial applications. Indeed, that Burzynski gets away with this is yet more evidence that his IRB is nothing more than a rubber stamp, because any independent IRB would ask some very hard questions about such an arrangement. Very hard questions indeed. It’s painfully obvious that the BRI-IRB has never asked hard questions any IRB should be asking about any clinical trial Burzynski has proposed or about how his clinical trials are being carried out.
It’s even worse than that, though. Check out what I found on a website devoted to penny stocks, “Burzynski Research Institute (BZYR) Reignites On FDA Approaval” [sic]:
Focusing since 1967 on the isolation of various biochemicals produced by the human body as part of the body’s possible defense against cancer, the penny stock of Burzynski Research Institute, Inc. (BZYR) has exploded on the scene thanks to the efforts of CEO and President Stanislaw R. Burzynski, M.D., Ph.D. After notification that the U.S. Food and Drug Administration lifted its partial clinical hold on its IND for Antineoplastons A10/AS2-1 Injections, shares of BZYR stock begun to trade and are putting on a good show thus far. Although only around 25 million shares issued and outstanding do not belong to the founder who is determined to treat of various forms of brain tumors, the large jump upwards has made this little biotech burz-worthy.
You know, with all the typos and English that sounds as though it were written by someone who isn’t a native English speaker, this made me suspicious that Burzynski or someone from Burzynski’s clinic wrote it. Either that, or the web page is written by non-English speakers, which seems likely given that I found the same sorts of weird-sounding sentence constructions in other articles and I know Burzynski’s people can produce serviceable English prose when they need to. Be that as it may, I have little doubt that the Burzynski clinic is trying to take advantage of the FDA decision to bolster its flagging finances, which have been reportedly hurting since the partial clinical hold was placed. Whether this article on a website hawking penny stocks has anything to do with it, I don’t know, but its appearance right around this time sure doesn’t seem coincidental to me.
So what happened? Why did the FDA cave so ignominiously? How could it ignore 37 years of Burzynski’s therapeutic misadventures and abuse of science and the clinical trial process? I have a few ideas, but none of them are satisfying, and all of them are speculation, ranging from educated to, well, just speculation. Back in the 1990s, it was powerful legislators like Joe Barton leaning on the FDA to let Burzynski be Burzynski. Today, there are no visible and obvious champions in Congress for Burzynski. Even so, that doesn’t necessarily mean that such congressional patrons don’t exist, given the campaign waged by patient families and the Burzynski clinic to get people to write their Congressmen and Senators. Another likely possibility is that the FDA is just tired. If it shuts Burzynski down, it will be portrayed for years as the bad guy denying patients a chance at life, and there will be an enormous court battle. If it decides to fight, the FDA could end up with years of litigation, and, given the FDA’s unfortunately limited budget, it has to pick its battles. Does it help the FDA’s mission overall if it drains so many resources fighting what it likely views as small fry like Burzynski that it finds late in the fiscal year that it can’t afford to go after a large drug company? I don’t know, but given how long Burzynski’s been at it, the FDA’s decision is still an extreme dereliction of its duty to the public.
Whatever the reason that the FDA caved, we’ll probably never know. We can make FOIA requests, but the FDA is notoriously tight with the information it permits to be released, because it’s forbidden from releasing information that might endanger trade secrets of the companies it regulates, and it appears to take a fairly expansive view of what constitutes such information. At least, we’ll probably never know unless we can get a powerful Senator or Congressman (or two) interested and as outraged as we skeptics are. A Congressional investigation, as unlikely as that sounds, is probably the only thing that will get to the bottom of the FDA’s utter failure. It’s clear that in this case the FDA is no longer able or willing to protect the safety, finances, or rights of patients with advanced cancer.
Ed. Note: NOTE ADDENDUM
I daresay that I’m like a lot of you in that I spend a fair bit of time on Facebook. This blog has a Facebook page (which, by the way, you should head on over and Like immediately). I have a Facebook page; several of our bloggers, such as Harriet Hall, Steve Novella, Mark Crislip, Scott Gavura, Paul Ingraham, Jann Bellamy, Kimball Atwood, John Snyder, and Clay Jones, have Facebook pages. It’s a ubiquitous part of life, and arguably part of the reason for our large increase in traffic over the last year. There are many great things about Facebook, although there are a fair number of issues as well, mostly having to do with privacy and with automated scripts that can be easily abused by cranks like antivaccine activists to silence skeptics refuting their pseudoscience. Also, of course, every Facebook user has to realize that Facebook makes most of its money through targeted advertising directed at its users; so the more its users reveal, the better it is for Facebook, which can more precisely target advertising.
Whatever good and bad things about Facebook there are, however, there’s one thing that I never expected the company to be engaging in, and that’s unethical human subjects research, but if stories and blog posts appearing over the weekend are to be believed, that’s exactly what it did, and, worse, it’s not very good research. The study, entitled “Experimental evidence of massive-scale emotional contagion through social networks“, was published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), and its corresponding (and first) author is Adam D. I. Kramer, who is listed as being part of the Core Data Science Team at Facebook. Co-authors include Jamie E. Guillory at the Center for Tobacco Control Research and Education, University of California, San Francisco and Jeffrey T. Hancock from the Departments of Communication and Information Science, Cornell University, Ithaca, NY.

IRB? Facebook ain’t got no IRB. Facebook don’t need no IRB! Facebook don’t have to show you any stinkin’ IRB approval!

Sort of.
There’s been a lot written over a short period of time (as in a couple of days) about this study. Therefore, some of what I write will be my take on issues others have already covered. However, I’ve also delved into some issues that, as far as I’ve been able to tell, no one has covered, such as why the structure of PNAS might have facilitated a study like this “slipping through” despite its ethical lapses.
Before I get into the study itself, let me just discuss a bit where I come from in this discussion. I am trained as a basic scientist and a surgeon, but these days I mostly engage in translational and clinical research in breast cancer. The reason is simple. It’s always been difficult for all but a few surgeons to be both a basic researcher and a clinician and at the same time do both well. However, with changes in the economics of even academic health care, particularly the ever-tightening drive for clinicians to see more patients and generate more RVUs, it’s become darned near impossible. So clinicians who are still driven (and masochistic) enough to want to do research have to play to their strengths and make sure there’s a strong clinical bent to their research. That involves clinical trials, in my case cancer clinical trials. That’s how I became so familiar with how institutional review boards (IRBs) work and with the requirements for informed consent. I’ve also experienced what most clinical researchers have experienced, both personally and through interactions with their colleagues, and that’s the seemingly never-ending tightening of the requirements for what constitutes true informed consent. For the most part, this is a good thing. However, at times IRBs do appear to go a bit too far, particularly in the social sciences. This PNAS study, however, is not one of those cases.
The mile high view of the study is that Facebook intentionally manipulated the feeds of 689,003 English-speaking Facebook users between January 11th-18th, 2012 in order to determine whether showing more “positive” posts in a user’s Facebook feed was an “emotional contagion” that would inspire the user to post more “positive” posts himself or herself. Not surprisingly, Kramer et al found that showing more “positive” posts did exactly that, at least within the parameters as defined, and that showing more “negative” posts resulted in more “negative” posting. I’ll discuss the results in more detail and problems with the study methodology in a moment. First, here’s the rather massive problem. Where is the informed consent? This is what the study says about informed consent and the ethics of the experiment:
Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count software (LIWC2007) (9) word counting system, which correlates with self-reported and physiological measures of well-being, and has been used in prior research on emotional expression (7, 8, 10). LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research. Both experiments had a control condition, in which a similar proportion of posts in their News Feed were omitted entirely at random (i.e., without respect to emotional content).
Does anyone else notice anything? I noticed right away, both from news stories and when I finally got around to reading the study itself in PNAS. That’s right. There’s no mention of IRB approval. None at all. I had to go to a story in The Atlantic to find out that apparently the IRB of at least one of the universities involved did approve this study:
Did an institutional review board—an independent ethics committee that vets research that involves humans—approve the experiment?
Yes, according to Susan Fiske, the Princeton University psychology professor who edited the study for publication.
“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”
Fiske added that she didn’t want the “the originality of the research” to be lost, but called the experiment “an open ethical question.”
This is not how one should find out whether a study was approved by an IRB. Moreover, news coming out since the story broke suggests that there was no IRB approval before publication.
Also, saying that Susan Fiske is the professor who “edited the study” means that she is the member of the National Academy of Sciences who served as editor for the paper. What that means depends on the submission track the manuscript took, because PNAS is a different sort of journal. As I’ve discussed in previous posts, Linus Pauling published papers on his vitamin C quackery in PNAS back in the 1970s. Back then, members of the Academy could contribute papers to PNAS as they saw fit and in essence hand-pick their reviewers. Indeed, until recently, the only way that non-members could have papers published in PNAS was if a member of the Academy agreed to submit their manuscript for them, then known as “communicating” it (apparently these days known as “editing” it), and, in fact, members were supposed to take responsibility for having such papers reviewed before “communicating” them to PNAS. Thus, in essence, a member of the Academy could get nearly anything he or she wished published in PNAS, whether written by the member or by a friend. Normally, this ability wasn’t such a big problem for quality, because getting into the NAS was (and still is) so incredibly difficult that only the most prestigious scientists are invited to join. Consequently, PNAS is still a prestigious journal with a high impact factor, and most of its papers are of high quality. Scientists know, however, that Academy members sometimes used it as a “dumping ground” for leftover findings. They also know that, on the rare occasions when a member fell for dubious science, as Pauling did, he could “communicate” his questionable findings and get them published in PNAS unless they were so outrageously ridiculous that even the deferential editorial board couldn’t stomach publishing them.
These days, submission requirements for PNAS are more rigorous. The standard mode is now called Direct Submission, which is still unlike that of any other journal in that authors “must recommend three appropriate Editorial Board members, three NAS members who are expert in the paper’s scientific area, and five qualified reviewers.” Not very many authors are likely to be able to manage this. I doubt I could, and I even know an NAS member who’s a cancer researcher; I’m just not sure I would want to impose on him to handle one of my manuscripts. What used to be “contributed by” submissions are now apparently handled through “prearranged editors” (PEs). A prearranged editor must be a member of the NAS:
Prior to submission to PNAS, an author may ask an NAS member to oversee the review process of a Direct Submission. PEs should be used only when an article falls into an area without broad representation in the Academy, or for research that may be considered counter to a prevailing view or too far ahead of its time to receive a fair hearing, and in which the member is expert. If the NAS member agrees, the author should coordinate submission to ensure that the member is available, and should alert the member that he or she will be contacted by the PNAS Office within 3 days of submission to confirm his or her willingness to serve as a PE and to comment on the importance of the work.
According to Dr. Fiske, there are only “a half dozen or so social psychologists” in the NAS out of over 2,000 members. Assuming Dr. Fiske’s estimate is accurate, my first guess was that this manuscript was submitted with Dr. Fiske as a prearranged editor because the relevant specialty does not have broad representation in the NAS. Why that is so is a question for another day. Oddly enough, however, my first guess was wrong. This paper was a Direct Submission, as stated at the bottom of the article itself. Be that as it may, I remain quite shocked that PNAS doesn’t, as virtually all journals that publish human subjects research do, explicitly require authors to state that they had IRB approval for their research. Some journals even require proof of IRB approval before they will publish. In fact, in this case PNAS clearly failed to enforce its own requirements, which call for authors to state that their research was approved by an IRB and that informed consent was obtained.
The authors did neither.
While it is true that Facebook itself is not bound by the federal Common Rule that requires IRB approval for human subjects research, because it is a private company that does not receive federal grant funding and was not performing the research for an application for FDA approval, as pharmaceutical and device companies do, the other two coauthors were faculty at universities that do receive a lot of federal funding and are therefore bound by the Common Rule. So it’s a really glaring issue that not only is there no statement of approval from Cornell’s IRB for coauthor Jeffrey T. Hancock, but there is also no statement of approval from UCSF’s IRB for coauthor Jamie Guillory. If there’s one thing I’ve learned in human subjects ethics training at every university where I’ve had to take it, it’s that there must be IRB approval from every university with faculty involved in a study.
IRB approval or no IRB approval, the federal Common Rule lays out several requirements for informed consent in a checklist. Key elements include a statement that the study involves research, along with an explanation of its purposes and a description of the procedures to be followed; a description of any reasonably foreseeable risks or discomforts; and a statement that participation is voluntary and that the subject may discontinue participation at any time without penalty.
Now, I realize that in the social sciences, depending on the intervention being tested, “informed consent” standards might be less rigorous, and there are examples where it is thought that IRBs have overreached in asserting their hegemony over the social sciences, a concern that dates back several years. What Facebook has is not “informed consent,” anyway. As has been pointed out, this is not how even social scientists define informed consent, and it’s certainly nowhere near how medical researchers define informed consent. Rather, as Will over at Skepchick points out correctly, the Facebook Data Use Policy is more like a general consent than informed consent, similar to the sort of consent form a patient signs before being admitted to the hospital:
Or, if we want to look at biomedical research, which is the kind of research that inspired the Belmont Report, Facebook’s policy is analogous to going to a hospital, signing a form that says any data collected about your stay could be used to help improve hospital services, and then unknowingly participating in a research project where psychiatrists are intentionally pissing off everyone around you to see if you also get pissed off, and then publishing their findings in a scientific journal rather than using it to improve services. Do you feel that you were informed about being experimented on by signing that form?
That’s exactly why “consents to treat” or “consents for admission to the hospital” are not consents for biomedical research. There are minor exceptions. For instance, some consents for surgery include consent to use any tissue removed for research.
In fairness, it must be acknowledged that there are criteria under which certain elements of informed consent can be waived by an IRB. The relevant standard comes from §46.116(d) and requires that: (D1) the research involves no more than minimal risk to the subjects; (D2) the waiver will not adversely affect the rights and welfare of the subjects; (D3) the research could not practicably be carried out without the waiver; and (D4) whenever appropriate, the subjects will be provided with additional pertinent information after participation.
Here’s where we get into gray areas. Remember, all of these conditions have to apply before a waiver of informed consent can occur, and clearly all of them do not. D1, for instance, is likely true of this research, although from my perspective D2 is arguable at best, particularly if you believe that users of a commercial company’s service should have the right to know what is being done with their information. D3 is arguable either way. For example, it’s not hard to imagine sending out a consent to all Facebook users, and, given that Facebook has over a billion users, it’s not unlikely that hundreds of thousands would say yes. In contrast, D4 appears not to have been honored. There’s no reason Facebook couldn’t have informed the actual users who were monitored after the study was over what had been done. Even if one could agree that conditions D1-3 were already met, the IRB should have insisted on D4, because there’s no reason to suspect that doing so would have been inappropriate in the sense of altering the outcome of the experiment. No matter how you slice it, there was a serious problem with the informed consent for this study on multiple levels.
Even Susan Fiske admits that she was “creeped out” by the study. To me, that’s a pretty good indication that something’s not ethically right; yet she edited it and facilitated its publication in PNAS anyway. She also doesn’t understand the Common Rule:
“A lot of the regulation of research ethics hinges on government supported research, and of course Facebook’s research is not government supported, so they’re not obligated by any laws or regulations to abide by the standards,” she said. “But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It’s voluntary. You could imagine if you were a drug company, you’d want to be able to say you’d done the research ethically because the backlash would be just huge otherwise.”
No, no, no, no! The reason drug companies follow the Common Rule is that the FDA requires them to; data from research not done according to the Common Rule can’t be used to support an application to the FDA to approve a drug. It’s also not voluntary for faculty at universities that receive federal research grant funding if those universities have signed on to the Federalwide Assurance (FWA) for the Protection of Human Subjects (as Cornell has done and UCSF appears to have done), as I pointed out when I criticized Dr. Mehmet Oz’s made-for-TV green coffee bean extract study. Nor should it be, and I am unaware of a major university that has refused to “check the box” in the FWA promising that all of its human subjects research will be subject to the Common Rule, which makes me wonder how truly “voluntary” agreeing to be bound by the Common Rule really is. Moreover, as I was finishing this post, I learned that the study actually did receive some federal funding through the Army Research Office, as described in the Cornell University press release. [NOTE ADDENDUM ADDED 6/30/2014.] IRB approval was definitely required. I note that I couldn’t find any mention of such funding in the manuscript itself.

The study itself
The basic finding of the study, namely that people alter their emotions and moods based upon the presence or absence of other people’s positive (and negative) moods, as expressed in Facebook status updates, is nothing revolutionary. It’s about as much a “Well, duh!” finding as I can imagine. The researchers themselves referred to this effect as an “emotional contagion,” because their conclusion was that Facebook friends’ words that users see on their Facebook News Feed directly affected the users’ moods. But is the effect significant, and does the research support the conclusion? The researchers’ methods were described thusly in the study:
The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure—thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion. People who viewed Facebook in English were qualified for selection into the experiment. Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced. In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing. It is important to note that this content was always available by viewing a friend’s content directly by going to that friend’s “wall” or “timeline,” rather than via the News Feed. Further, the omitted content may have appeared on prior or subsequent views of the News Feed. Finally, the experiment did not affect any direct messages sent from one user to another.
For each experiment, two dependent variables were examined pertaining to emotionality expressed in people’s own status updates: the percentage of all words produced by a given person that was either positive or negative during the experimental period (as in ref. 7). In total, over 3 million posts were analyzed, containing over 122 million words, 4 million of which were positive (3.6%) and 1.8 million negative (1.6%).
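The Methods say only that each user’s omission probability was “based on their User ID”; the actual mechanism is not published. As a minimal sketch of how such deterministic, per-user assignment could work (the hash-based scheme below is my assumption for illustration, not Facebook’s code):

```python
import hashlib

def omission_rate(user_id: int) -> float:
    """Map a user ID deterministically to an omission probability in [0.10, 0.90].

    The paper says rates were "based on their User ID"; this particular
    hashing scheme is a guess for illustration only.
    """
    digest = hashlib.sha256(str(user_id).encode()).digest()
    fraction = digest[0] / 255  # first byte -> value in [0, 1]
    return 0.10 + 0.80 * fraction

def should_omit(user_id: int, post_id: int) -> bool:
    """Decide, per viewing, whether an emotional post is hidden for this user."""
    rate = omission_rate(user_id)
    roll = hashlib.sha256(f"{user_id}:{post_id}".encode()).digest()[0] / 255
    return roll < rate
```

Because the rate is a pure function of the user ID, the same user always gets the same omission probability across viewings, which is what the design requires.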
The results were summed up in a single deceptive chart. Why do I call the chart deceptive? Easy: the graphs were drawn in such a way as to make the effect look much larger than it is, by starting the y-axis at 5.0 in the graph showing a difference between roughly 5.3 and 5.25, and at 1.5 in the graph showing a difference between roughly 1.75 and 1.73. See what I mean by looking at Figure 1:
This is another thing the authors did that I can’t believe Dr. Fiske and PNAS let them get away with, as messing with where the y-axis of a graph starts in order to make a tiny effect look bigger is one of the most obvious tricks there is. In this case, given how tiny the effect is, even if it were a statistically significant effect, it’s highly unlikely to be what we call a clinically significant effect.
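The arithmetic of the trick is easy to check: the apparent gap between two bars, relative to the visible bar height, is inflated by a factor of max/(max − axis_start) when the axis starts above zero. The values below are my approximate reading of Figure 1, not the exact data:

```python
def visual_exaggeration(a: float, b: float, axis_start: float) -> float:
    """Ratio of the apparent (on-screen) difference between two bars to the
    honest relative difference when the y-axis starts at axis_start, not 0."""
    honest = abs(a - b) / max(a, b)                    # relative difference, full axis
    apparent = abs(a - b) / (max(a, b) - axis_start)   # relative to visible bar height
    return apparent / honest

# Approximate values read off the study's Figure 1 (my reading, not exact data):
print(visual_exaggeration(5.3, 5.25, 5.0))   # about 17.7: bars look ~18x further apart
print(visual_exaggeration(1.75, 1.73, 1.5))  # 7.0: bars look 7x further apart
```

A ~1% real difference is thus rendered as roughly a sixth of the visible bar height.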
But are these valid measures? John M. Grohol of PsychCentral was unimpressed, pointing out that the tool that the researchers used to analyze the text was not designed for short snippets of text, asking snarkily, “Why would researchers use a tool not designed for short snippets of text to, well… analyze short snippets of text?” and concluding that it was because the tool chosen, the LIWC, is one of the few tools that can process large amounts of text rapidly. He then went on to describe why it’s a poor tool to apply to a Tweet or a brief Facebook status update:
Length matters because the tool actually isn’t very good at analyzing text in the manner that Twitter and Facebook researchers have tasked it with. When you ask it to analyze positive or negative sentiment of a text, it simply counts negative and positive words within the text under study. For an article, essay or blog entry, this is fine — it’s going to give you a pretty accurate overall summary analysis of the article since most articles are more than 400 or 500 words long.
For a tweet or status update, however, this is a horrible analysis tool to use. That’s because it wasn’t designed to differentiate — and in fact, can’t differentiate — a negation word in a sentence.
Let’s look at two hypothetical examples of why this is important. Here are two sample tweets (or status updates) that are not uncommon:
“I am not happy.”
“I am not having a great day.”
An independent rater or judge would rate these two tweets as negative — they’re clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale.
But the LIWC 2007 tool doesn’t see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words “great” and “happy”) and +2 for negative (because of the word “not” in both texts).
That’s a huge difference if you’re interested in unbiased and accurate data collection and analysis.
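Grohol’s point is easy to reproduce. A bag-of-words counter in the LIWC style (the tiny word lists below are my stand-ins, not the actual LIWC 2007 dictionaries) scores “I am not happy” as one positive and one negative word, because it counts “happy” and “not” independently and never sees the negation:

```python
# Toy bag-of-words sentiment counter illustrating Grohol's point;
# the word lists are tiny stand-ins, not the actual LIWC 2007 dictionaries.
POSITIVE = {"happy", "great", "love", "good"}
NEGATIVE = {"not", "sad", "hate", "bad"}  # per Grohol, negations count as negative words

def score(text: str) -> dict:
    """Count positive and negative words with no regard for context or negation."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return {
        "positive": sum(w in POSITIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
    }

print(score("I am not happy."))              # {'positive': 1, 'negative': 1}
print(score("I am not having a great day."))  # {'positive': 1, 'negative': 1}
```

A human rater would call both sentences purely negative; the counter splits each of them evenly between the two scales, exactly the failure mode Grohol describes.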
Indeed it is. So not only was this research of questionable ethics, it wasn’t even particularly good research. I tend to agree with Dr. Grohol that most likely these results represent nothing more than “statistical blips.” The authors themselves admit that the effects are tiny. Yet none of that stops them from concluding that their “results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.” Never mind that they never measured a single person’s emotions or mood states.

“Research” to sell you stuff
Even after the firestorm that erupted this weekend, Facebook unfortunately still doesn’t seem to “get it,” as is evident from its response yesterday:
“This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account,” says a Facebook spokesperson. “We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.”
This is about as tone deaf and clueless a response as I could have expected. Clearly, it was not a researcher, but a corporate drone who wrote the response. Even so, he or she might have had a point if the study were strictly observational. But it wasn’t. It was an experimental study; i.e., an interventional study. An intervention was made directed at one group as compared to a control group, and the effects measured. That the investigators used a poor tool to measure such effects doesn’t change the fact that this was an experimental study, and, quite rightly, the bar for consent and ethical approval is higher for experimental studies. Facebook failed in that and still doesn’t get it. As Kashmir Hill put it, while many users may already expect and be willing to have their behavior studied, they don’t expect that Facebook will actively manipulate their environment in order to see how they react. On the other hand, Facebook has clearly been doing just that for years. Remember, its primary goal is to get you to pay attention to the ads it sells, so that it can make money.
There is, however, one final “out” that Facebook might claim by lumping its “research” and “service improvement” together. In medicine, quality improvement initiatives are not considered “research” per se and do not require IRB approval. I’m referring to initiatives, for instance, to measure surgical site infections and look for situations or markers that predict them, the purpose being to reduce the rate of such infections by intervening to correct the issues discovered. I myself am heavily involved in just such a collaborative to examine various factors, most critically adherence to evidence-based guidelines, as an indicator of quality.
Facebook might try to claim that its “research” was in reality for “service” improvement, but if that’s the case that would be just as disturbing. Think about it. What “service” is Facebook “improving” through this research? The answer is obvious: Its ability to manipulate the emotions of its users in order to better sell them stuff. Don’t believe me? Here’s what Sheryl Sandberg, Facebook’s chief operations officer, said recently:
Our goal is that every time you open News Feed, every time you look at Facebook, you see something, whether it’s from consumers or whether it’s from marketers, that really delights you, that you are genuinely happy to see.
As if that’s not enough, here it is, from the horse’s mouth, that of Adam Kramer, corresponding author:
Q: Why did you join Facebook?
A: Facebook data constitutes the largest field study in the history of the world. Being able to ask–and answer–questions about the world in general is very, very exciting to me. At Facebook, my research is also immediately useful: When I discover something, we can use this to make improvements to the product. In an academic position, I would have to have a paper accepted, wait for publication, and then hope someone with the means to usefully implement my work takes notice. At Facebook, I just message someone on the right team and my research has an impact within weeks if not days.
I don’t think it can be said much more clearly than that.
As tempting a resource as Facebook’s huge amounts of data might be to social scientists interested in studying online social networks, social scientists need to remember that Facebook’s primary goal is to sell advertising, and therefore any collaboration they strike up with Facebook information scientists will be designed to help Facebook accomplish that goal. That might make it legal for Facebook to dodge human subjects protection guidelines, but it certainly doesn’t make it ethical. That’s why social scientists must take extra care to make sure any research using Facebook data is more than above board in terms of ethical approval and oversight, because Facebook has no incentive to do so and doesn’t even seem to understand why its research failed from an ethical standpoint. Jamie Guillory, Jeffrey Hancock, and Susan Fiske failed to realize this and have reaped the whirlwind.
ADDENDUM: Whoa. Now Cornell University is claiming that the study received no external funding and has tacked an addendum onto its original press release about the study.
Also, as reported here and elsewhere, Susan Fiske is now saying that the investigators had Cornell IRB approval for using a “preexisting data set”:
I just heard back from the Facebook study editor on the question of whether researchers used an IRB. pic.twitter.com/kCoEp1LNJ8
— Adrienne LaFrance (@AdrienneLaF) June 30, 2014
Also, yesterday after I had finished writing this post, Adam D. I. Kramer published a statement, which basically confirms that Facebook was trying to manipulate emotions and in which he apologized, although it sure sounds like a “notpology” to me.
Finally, we really do appear to be dealing with a culture clash between tech and people who do human subjects research, as described well here:
Academic researchers are brought up in an academic culture with certain practices and values. Early on they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academic researchers, the study falls in the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.
Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.
This, I think, accounts for the conflict between the techies, who shrug their shoulders and say, NBD, and medical researchers like me who are appalled by this experiment.
For the first time, ScienceBasedMedicine.org has reached a million page views in a month, thanks to a surge in social media buzz. We’ve come close before, but finally pushed comfortably past that major milestone earlier this week. As of today, SBM served 1,051,943 pages to 649,315 visitors in the last thirty days. These are mainstream-scale numbers: SBM is now competing effectively with many popular websites about not-so-science-based medicine.
What articles are attracting so much attention? The traffic surge is powered by several popular recent posts, but mostly two of Dr. Gorski’s, about the Food Babe and John Oliver skewering Dr. Oz. Dr. Novella’s Food Fears post isn’t far behind. Other respectable slices of the traffic pie chart include Dr. Hall’s perpetually popular Isagenix post, and Scott Gavura’s coffee enema post — which also happen to be the two busiest SBM pages of all time, with Aspartame — Truth vs Fiction in third place.
SBM’s inaugural post was on January 1, 2008. Unfortunately, we have no traffic data until the middle of 2013. Since then, we’ve seen a doubling in average monthly traffic. It’s been a team effort, of course, but Facebook and Twitter have been huge factors in that steady growth. Bobby Hannum runs those accounts for us, and somehow finds time to post and tweet almost every single day while going to medical school. If you haven’t already, please like and follow.
Next stop: a million views per week…
~ Paul Ingraham, Assistant Editor
I suspect there is more published about traditional Chinese medicine than about any other SCAM. Here are some of the recent curiosities of TCM.
The little girls laughed about the germs, because they didn’t believe in them; but they believed about the disease, because they’d seen that happen. Spirits caused it, everyone knew that. Spirits and bad luck. Jack had not said the right prayers.
- Oryx and Crake
I long ago gave up on the idea that there are a finite number of pseudo-medical treatments. Anything a human can imagine will probably be used as a SCAM intervention. I remain amazed at the permutations that occur in the pseudo-medical world, not unlike the mix and match bioforms in Oryx and Crake.
Not everyone knows basic anatomy and physiology that allows for understanding of disease. Instead, people often rely on metaphor and magic for their understanding, especially in the world of pseudo-medicine. Sympathetic magic lies at the heart of many SCAMs.
The classic example is rhino horns for impotence. But there are other examples. What makes blood flow to a body part? Heat. What is hot? Fire. Why are you impotent? Lack of blood flow. Put it all together and it spells fire: Set your crotch alight to cure impotence. Really.
It is all about keeping blood flow moving rapidly. The warmth from the burning towels speeds the blood through the body and it makes me perform 50% better in bed.
The accompanying photo of a flaming groin is a prelude to a What’s the harm? entry or a most unpleasant admission to the burn center if it goes horribly wrong.
There is nothing on the PubMeds concerning fire therapy and little on the internet. There are several versions of fire therapy. It
is much more advanced and powerful than Moxibustion.
Given the total uselessness of moxibustion, I suspect that being several times more powerful than nothing is still nothing.
While the current photographs are from China, it allegedly originated in Tibet where:
The thermotherapeutic procedure consists in the application of a herbal product with a specific formula for each disease under treatment on the area of the affected organ. The area is covered with a towel soaked in alcohol and it is then lighted [sic]. The heat produced by the burning of the alcohol is easily born by the patient. The procedure is stopped when the patient announces a disconfort [sic]. The vasodilator effect produced by the fire heat accelerates the local blood circulation and the local metabolism. Thus, the curing substances of the herbs will be carried directly to the sick organ and they will act immediately at local level.
Its alleged mechanism of action is that
All health problems relate microcirculation deficiency. At capillary level, the blood become stagnate [sic], then toxin will be cumulate [sic], using Fire Dragon Therapy can improve the microcirculation and to remove stagnate [sic] toxins.
Yep. Toxins. And:
The vasodilator effect produced by the fire heat accelerates the local blood circulation and the local metabolism. Thus, the curing substances of the herbs will be carried directly to the sick organ and they will act immediately at local level.
Like most pseudo-medicines, there is no process for which fire therapy cannot be used, including as a beauty aid.
General fire dragon therapy can help cure the following disorders: Indigestion, low metabolism, low temperature, melancholy, pain caused by stress and tension, insomnia, anxiety, fear, panic attacks, stomach distension, vertigo, hiatus hernia, benign tumors, cold bile disease, joint pains, arthritis, bone deformation, joint inflammation, superficial fever (empty fever), post-menopause syndrome and nerve inflammation (sciatic nerve, neurological disorders, etc.). In short, fire dragon therapy is good for diseases which manifest from phlegm and wind humoral disorders.
Phlegm and wind. Good for teenage boys? And are there side effects? Of course:
Huo Long therapy produces many side effects, but all of them are positive.
The procedure as described is relatively safe: the towels are wet and it is the alcohol vapor above the towel, not the liquid alcohol on the towel, that is burning. As long as it doesn’t ignite the clothes or the environment, it poses little risk.
But it sure looks stupid to me.

Pop Pop
There is often the suggestion that you should consult a licensed and certified acupuncturist, not just any old needle wrangler down the street, to practice their magic on you.
I don’t know. I would think that licensed and certified magic is no more effective than unlicensed and uncertified magic.
Maybe they might know a bit more if certified, but the pass rates for acupuncture boards are not impressive, at least in California.
In February 2014, only 62% of first-time test takers in California passed and overall 49% passed. Gives one pause.
There are several sites on the internet with Acupuncture Board questions and flash cards. I took the tests and missed all the questions. The questions often seemed goofy to me, but then I find all of the theory and practice of acupuncture goofy.
Knowing the crossing point of the spleen meridian and the thoroughfare vessel, or that cupping removes putrefaction and promotes granulation, somehow has no relevance to what I would consider biomedical reality.
I wonder, as an aside, what the effect of board certification will be on the practice of acupuncture. There is a huge variety of styles (by country and by practitioner), acupoints, variations (bee venom or cat gut added), and techniques. There are more acupunctures than acupuncture, perhaps as many forms as there are practitioners. I also note that some schools have high pass rates and others do not. I predict that with board certification the variability of acupuncture will decline as, at leastin the US, schools teach to the test.
The biomedical sample questions were often simplistic and, if indicative of the knowledge base of practitioners who want to be primary care providers, scary. I was reassured to find questions concerning proper hand hygiene and sterilization of needles, although I am skeptical about their application.
What I did not find (and that doesn’t mean they were not there; it was not an exhaustive search) were questions testing whether acupuncturists had an understanding of the importance of the anatomy under their acupoints. Evidently not, for if you search acupuncture and complications on the PubMeds you will find seven pages of articles, some of which have titles that suggest needle points are going where they should not:
Some of those are impressive. It takes real effort to get deep enough to pop a stomach or heart. I would be hard-pressed to accomplish such a result deliberately.
Those are from the first two pages of search results and do not include my all-time favorite:
“I can’t figure out how the needle got into there,” Dr Sung Myung-whun was quoted as telling reporters at the hospital after the operation. “It is a mystery for me, too.”
Um, maybe because acupuncturists don’t really know what they are doing when they stick needles in people? They do not really know how deep they can safely push a needle, since they have no understanding of anatomy?
There is a push to include acupuncturists as primary care providers. Given the nature of their training and what passing their Boards includes (mostly magic) and excludes (reality and anatomy), I would be skeptical of their abilities.

CIGO: Cochrane In, Garbage Out
The Cochrane reviews. They give me pause. I understand the need for and utility of systematic reviews and meta-analyses. They can give a nice overview of a topic and suggest the utility, or lack thereof, of a given therapy. But they are not definitive and suffer from the problem of GIGO: garbage in, garbage out.
GIGO is especially pertinent when the methodologies of systematic reviews are applied to pseudo-medical interventions that are divorced from reality.
My colleagues and I have written extensively about acupuncture (we have collected many of the essays in book form, available at Amazon. Hint. Hint.).
The summary of acupuncture: it is not based in reality (there are no meridians or acupoints), and well-designed clinical trials suggest that acupuncture only works for subjective endpoints, and only if the patient thinks they are getting acupuncture and believes it to be effective. It does not matter where needles are placed or even if needles are used at all. From a prior plausibility perspective, any positive effect from acupuncture is likely due to a combination of bias and poor study design.
But that never stops the Cochrane collaboration, who will run anything and everything through their grinder to produce a meta-analysis sausage. Unfortunately, unlike sausage, I often know what goes into the meta-analysis.
Acupuncture is the rodent hair and insect parts in the bratwurst that is “Acupuncture for treating acute ankle sprains in adults.” Can I beat a metaphor to death or what? Anything and everything that calls itself acupuncture is included; no form was ignored:
We included all types of acupuncture practices, such as needle acupuncture, electroacupuncture, laser acupuncture, pharmacoacupuncture, non-penetrating acupuncture point stimulation (e.g. acupressure and magnets) and moxibustion. Acupuncture could be compared with control (no treatment or placebo) or another standard non-surgical intervention.
Acupuncture, as is often the case, is anything they want it to be. Insert Humpty Dumpty quote here. Talk about your “heterogeneous group of acupuncture and quasi-acupuncture.” And, what a surprise, they did not find any evidence that acupuncture, however defined, was effective for acute ankle sprain:
The currently available evidence from a very heterogeneous group of randomized and quasi-randomised controlled trials evaluating the effects of acupuncture for the treatment of acute ankle sprains does not provide reliable support for either the effectiveness or safety of acupuncture treatments, alone or in combination with other non-surgical interventions; or in comparison with other non-surgical interventions.
Of course, reality will never provide reliable support for either the effectiveness or safety of acupuncture treatments, because acupuncture is based on fantasy and its practitioners don’t really know what they are doing. Seen through the lens of prior high-quality studies of acupuncture, the following conclusion promotes a waste of time and money:
Future rigorous randomised clinical trials with larger sample sizes will be necessary to establish robust clinical evidence concerning the effectiveness and safety of acupuncture treatment for acute ankle sprains.
But for some reason the Cochrane group always suggests more studies. At least they did not suggest that it may be worthwhile for ankle sprain patients to test on an individual basis whether therapeutic acupuncture is beneficial for them.
They can only be that lunkheaded once. I hope.

TB or Not TB
As regular readers are aware, I am an Infectious Disease doctor and have been Medical Director of my hospital system’s infection control program for 24 years.
It is impressive how Murphy rules in infection control. If something can cause an infection, it will cause an infection given the right circumstances.
Needles piercing the skin can drag in bacteria from the skin of the patient, from the hand of the practitioner, or even from the slight aerosolization of the practitioner’s spit, which can drag oral bacteria into spinal fluid. It is why we wear a mask and gloves for many injections.
Careful infection control technique is not high on the to-do list of acupuncture practitioners and a search of PubMed will result in a long list of mostly-preventable infections. As I think about it, since there is no real indication for acupuncture, they are completely preventable infections.
And now there is a report of cutaneous TB: “Analysis of 30 Patients with Acupuncture-Induced Primary Inoculation Tuberculosis”.
The use of Chinese acupuncture needles which are able to deeply penetrate into the tissues surrounding tendons and nerves provide an ideal route for the inoculation of tuberculosis. The patients in our outbreak underwent acupuncture twice daily for two weeks. This high degree of potential exposure may explain why there were no cases of spontaneous healing.
From an infection control perspective, it was interesting that:
Despite the unsuccessful identification of the source of contamination, it is apparent that these infections were linked to acupuncture and moxibustion, because the 30 patients had the same epidemiological characteristics.
Most of the 30 patients had multiple skin infections, but the lesions were located to the sites of acupuncture and electrotherapy. Lesion severity and drug reactions in individual patient were similar, but we did not know whether these multiple lesions were independent or the result of the inoculation infections in the wounds via hemo-disseminated Mycobacterium tuberculosis.
And some patients had metastatic infections:
Although, occurrence of the three patients with meningeal and pulmonary tuberculosis and two patients with knee tuberculosis had confirmed the hemo-disseminated ability of this primary inoculation Mycobacterium tuberculosis to other tissues and the compartments.
And they finish with a little ironic humor:
Mycobacterium can easily spread without proper microbiological control of these procedures. To this end, it was recently suggested that herbal medicine and acupuncture professions should also develop a system of statutory regulation which should help prevent these issues.
Those whose world view holds that disease is due to the fanciful constructs of meridians and chi are unlikely to pay close attention to germs and their potential spread. In medicine we are fortunate that it is usually hard to infect other humans, especially if you are punctilious about applying the concepts of infection control. Too bad infection prevention is not part of their understanding.