Instrument-assisted soft tissue mobilization (IASTM): Review Article

Hybrid Performance Method
October 7, 2019

Read time ~7 minutes

Written by Ian Kaplan

Instrument-assisted soft tissue mobilization (IASTM) is the medical term for a centuries-old manual therapy technique that involves scraping a tool across the skin. Couched in “the wisdom of tradition,” many alternative providers picked up the technique along with the ancient narratives surrounding its effects. Some basic research investigating possible mechanisms was strategically floated at expensive seminars, at least in part to sell expensive metal tools to practitioners. These seminars perpetuated many claims now understood to be bogus. Language like “breaking up scar tissue,” “removing adhesions,” “removing bad blood,” and “detoxifying lymph” was par for the course. Not only are these claims unsubstantiated remnants of ancient ideas, akin to aether and black bile (neither of which exists; Aristotle and Hippocrates were not omniscient beings), it’s not clear how they are even relevant to patient outcomes. As Ernst (2009) said so eloquently in his defense of the modern scientific method in medicine:

“Clinical effectiveness is a falsifiable hypothesis applicable to all interventions. Those who deny this fact may have reasons for trying to mislead us. The notion that ‘Bach Flower Remedies’ have healing power is not a political, sociological, or philosophical question. It is a claim that can be tested in rigorous clinical trials.”

“Wisdom” and pure logic are terrible predictors of outcomes in a complex, uncertain world. We must test, or attempt to falsify, our hypotheses in the common language of science to establish a cause-effect relationship between treatments and outcomes. It’s the best way to test an idea and compare it against a world of other possibilities.

In the hopes of finding IASTM a place in mainstream healthcare for musculoskeletal conditions, there has been a push for more rigorous outcome-based trials. In March, Nazari and colleagues published the largest review to date of all available trials, and the results were alarming.

The review included 20 trials with 86 total reported outcomes spanning function, pain, range of motion, grip strength, pressure sensitivity, and performance. Several studies reported multiple outcomes that reached the minimum threshold of “statistical significance,” meaning that if there were truly no treatment effect, a between-group difference at least as large as the one observed would be expected by chance less than 5% of the time. However, only 3 indirect outcomes met a threshold for minimal clinically important difference (MCID). A 0.01% difference may be detectable mathematically but is imperceptible clinically, so in translating results to practice we should set a higher bar than “statistical significance.” Even then, we have good reason to be skeptical of these results.
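To make that distinction concrete, here is a minimal simulation with made-up numbers (not data from the review): a trivially small true effect clears p < 0.05 once the sample is large enough, while falling far short of an assumed MCID.

```python
# Hypothetical illustration: statistical significance without clinical importance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

MCID = 2.0          # assumed minimal clinically important difference (points)
true_effect = 0.3   # assumed true between-group difference (points)
n = 2000            # participants per group

treatment = rng.normal(loc=true_effect, scale=3.0, size=n)
control = rng.normal(loc=0.0, scale=3.0, size=n)

t_stat, p_value = stats.ttest_ind(treatment, control)
observed_diff = treatment.mean() - control.mean()

print(f"p-value: {p_value:.4f}")                       # typically < 0.05 at this n
print(f"observed difference: {observed_diff:.2f} points")
print(f"clinically important (>= MCID)? {observed_diff >= MCID}")
```

A 0.3-point improvement is “significant” here, but a patient would never notice it.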

Since the study methods were poorly controlled, it wasn’t possible to pool the data into a single larger estimate, as is customary in a meta-analysis. Instead, the studies were grouped according to similar methodology and compared using a forest plot. A sham- or placebo-controlled randomized trial is the most rigorous design because it controls for the contextual (placebo) effects of treatment; ideally, the only difference between groups is exposure to the specific treatment. In the comparison of placebo-controlled trials, arranged according to outcomes at different time points, it’s clear that IASTM had no significant effect: every confidence interval crosses the zero-effect line. This means there was no detectable difference between improvements in the placebo group and the active treatment group, and any observed improvements can be explained by contextual effects present in both groups. This already begins to undermine the IASTM narrative, as empirical evidence carries more intellectual weight than “good experiences in the clinic.”
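The logic of reading a forest plot fits in a few lines. The sketch below uses invented trial values, not the review’s data: an effect estimate whose 95% confidence interval spans zero is statistically indistinguishable from no effect at all.

```python
# Hypothetical forest-plot entries: (label, effect, 95% CI lower, 95% CI upper).
mock_trials = [
    ("Trial A, pain at 4 wks",      0.4, -0.3, 1.1),
    ("Trial B, ROM at 4 wks",       0.1, -0.6, 0.8),
    ("Trial C, function at 12 wks", -0.2, -0.9, 0.5),
]

for label, effect, lo, hi in mock_trials:
    crosses_zero = lo < 0 < hi  # CI spans the zero-effect line
    verdict = "no significant effect" if crosses_zero else "significant"
    print(f"{label}: {effect:+.1f} [{lo:+.1f}, {hi:+.1f}] -> {verdict}")
```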

The minimal effects demonstrated across the studies in the review are more than likely overstated because of deeply embedded, obvious biases. The review authors found concerning practices throughout the included trials. The only trials that reported large effects of IASTM were published in suspected predatory journals, which are notorious for publishing virtually anything in exchange for a fee (effectively a bribe), ethically compromising the peer review process (Cook et al., 2018). These journals undermine scientific integrity, especially in research into commercial products like IASTM tools. Only 2 trials were pre-registered, meaning the investigators of the remaining studies were free to modify their methods mid-study to achieve statistical significance and overstate positive results, which is quite easy to do with minimal manipulation of otherwise randomly generated data (Simmons et al., 2011). And 60% of the studies did not disclose sources of funding, leaving questions about conflicts of interest unanswered, which is worse than acknowledging them.

This review was the first of its kind not because it was the largest to date, but because it included a risk of bias assessment and quality appraisal of each trial. Every trial demonstrated a high risk of bias and low quality. Specifically, according to the Cochrane risk of bias tool, each study had a high risk of performance bias: it wasn’t clear whether the participants and the providers/assessors were adequately blinded. Similarly, almost all trials may have suffered from selection bias if group allocation was not adequately concealed; people who believed they might benefit from the treatment could end up in the treatment group, or could simply know which group they were in. And because almost none were pre-registered, the investigators could change the design on the fly, selectively report favorable data, data dredge, or p-hack until they found results they deemed favorable. These structural problems cast doubt on the validity of all the findings, which are unimpressive even at face value.

Ioannidis proposed that many positive findings in research are actually “false positives”: the data indicate a statistically significant result where none actually exists, and a true null hypothesis is falsely rejected. This is also called a type I error. Think of a new drug: if we believe a potentially lifesaving drug works when in fact it doesn’t, that’s among the worst possible mistakes we can make. A well-designed study incorporates many safeguards against this type of error. In poorly designed research, type I errors may be the rule rather than the exception.
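A quick simulation makes the point. In the sketch below (an assumed setup for illustration), both “groups” are drawn from the same distribution, so every significant result is by definition a false positive; with a 0.05 threshold, roughly 5% of null studies produce one by chance alone.

```python
# Simulated null studies: no true effect exists, yet ~5% come out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group = 10_000, 30

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(size=n_per_group)   # "treatment" group, no real effect
    b = rng.normal(size=n_per_group)   # "control" group, same distribution
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"false-positive rate: {false_positives / n_studies:.3f}")  # ~0.05
```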

We should suspect a false positive if we see one of several relevant red flags:

  1. The very small study: these studies can be easily manipulated, or simply repeated, until a reportable level of significance appears.
  2. Small effect sizes: whereas statistical significance speaks to whether an effect is detectable at all, the effect size measures the magnitude of that possible effect. Imperceptibly small measured effects mean that not only is the treatment effect likely trivial at best, but the results could be mistaking statistical error, manipulation, or unexpected confounders for a treatment effect.
  3. The greater the flexibility in design and analysis, the less likely the findings reflect a true effect: when studies use wildly inconsistent procedures, are free to measure whatever they want, and analyze the data however they like (or, worse, don’t report how they analyzed it), false positives are quite easy to generate (the simulation after this list shows how quickly).
  4. The greater the financial and other interests and prejudices, the less likely the research findings are to be true: conflicts of interest are a real problem in science, but systemic prejudice is perhaps more pervasive. People rely on authority and expert opinion despite evidence that these sources are often unreliable, and appeals to ideas handed down by authority figures risk perpetuating false dogma to the next generation.
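Here is a rough sketch of the third red flag in action, echoing Simmons et al. (2011); the design and numbers are invented for illustration. If researchers measure several outcomes on data with no true effect and report only whichever comparison “worked,” the per-study false-positive rate balloons well past the nominal 5%.

```python
# Simulated "flexible analysis": 6 null outcomes per study, report the best p.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_studies, n_per_group, n_outcomes = 5_000, 30, 6  # e.g. pain, ROM, grip, ...

hacked_hits = 0
for _ in range(n_studies):
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group))[1]
        for _ in range(n_outcomes)  # one null comparison per outcome
    )
    if best_p < 0.05:  # report whichever outcome "worked"
        hacked_hits += 1

print(f"per-study false-positive rate: {hacked_hits / n_studies:.3f}")
# roughly 1 - 0.95**6, about 0.26, rather than the nominal 0.05
```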

Almost all of the studies in this review are prone to false-positive results in the ways described above, and each demonstrates an incredibly high risk of bias according to current guidelines and risk assessment tools. Yet even so, they fail to demonstrate convincing positive results.

It’s not that no one should ever use IASTM, but it’s unreasonable to call it an “evidence-based” tool when the meta-analytic data point powerfully in the opposite direction. Negative results deal a substantial blow. They absolutely don’t refute manual therapy entirely or conclusively disprove anything, but they do call into question the old narratives that are still packaged with these tools. Often, people defend the technique with the notion that research is disconnected from the realities of practice, or that their patients are somehow different from the study populations. While people may get better with IASTM, it is often delivered alongside other treatments, and people reliably get better with time and with placebo/sham treatments. People are not averages, and some might get better results than others, but we can’t reliably understand why from experience alone. Moreover, if the old dogma were at all true, everyone should benefit similarly: everyone has a broadly similar body, presents with similar “dysfunction,” and should respond similarly to scraping. It’s the simple scraping story, not the research, that assumes all people are pretty much the same; its proposed mechanism should work regardless of individual differences, and the effect should be apparent. The variability in outcomes is not a valid argument against research. It actually further undermines the simple scraping narrative.

Many people will continue to use IASTM; it has been used in one form or another for thousands of years. Maybe higher quality evidence will provide more insight, but it is unlikely to support the prevailing dogma. In the pursuit of best practices, the decision to continue using a therapy should be based not only on whether results are observable, but on whether a different intervention (or no intervention) could achieve even better results more efficiently. This tough process of self-appraisal is the sine qua non of quality, ethical, modern healthcare.

References

Ernst, E. (2009). Complementary/alternative medicine: Engulfed by postmodernism, anti-science and regressive thinking. British Journal of General Practice, 59(561), 298–301. https://doi.org/10.3399/bjgp09X420482

Cook, C. E., Cleland, J. A., & Mintken, P. E. (2018). Manual Therapy Cures Death: I Think I Read That Somewhere. The Journal of Orthopaedic and Sports Physical Therapy, 48(11), 830–832. https://doi.org/10.2519/jospt.2018.0107

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
