Scientific research has not been able to dispel the persistent criticism levelled at the homeopathic treatment approach. Critics continue to proclaim that homeopathy simply cannot work because research has not been able to demonstrate any active agent in the ultra-highly diluted and potentized remedies. Yet, although the modus operandi, the mechanism of action of homeopathy, remains unclear and cannot yet be explained, it is indisputable that patients have found, and continue to find, recovery under treatment with homeopathic remedies.

The lack of evidence of efficacy, though, is not only due to the missing explanation of how this treatment approach works; in principle it appears to stem from the weaknesses and flaws of the methods used to trial homeopathy. The primary testing tool used in research is the RCT, the randomised controlled trial. It is considered the “gold standard” of conventional scientific research because it is believed to minimize variables that may account for external influences on trial outcomes [1]. Any intervention seeking acknowledgement as safe and efficacious is required to be evaluated by this model of investigation [1]. Yet for investigations into holistic interventions that depend on a holistic symptomatology and an individualized appraisal, as homeopathic treatment does, this methodology is unsuited. To date, research into homeopathic treatment has exhibited weaknesses and flaws wherever the RCT was used, and trials have continued to report inconsistent outcomes regarding treatment efficacy.

The problems with applying the RCT to CAM interventions may lie in the fundamental modes of conduct inherent to the design, which are already denoted by the very nomenclature of the methodology.

Randomization refers to the trial participants' allocation, by chance, to the respective treatment or control group. The participant does not know which group he or she is in, or whether he or she is receiving the trial medication or the control, which usually is a placebo [2]: a remedy that is devoid of an active treatment substance but otherwise indistinguishable from the trial medication [3].
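To make the allocation step concrete, here is a minimal sketch in Python of a 1:1 chance allocation; the participant labels, group names and seed are purely hypothetical illustrations, not part of any actual trial protocol.

```python
import random

def randomize(participant_ids, seed=None):
    """Allocate participants by chance to a treatment or a placebo group (1:1)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)            # chance, not patient choice, decides the order
    half = len(ids) // 2
    return {"treatment": ids[:half], "placebo": ids[half:]}

# Example with eight hypothetical participants
groups = randomize(["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"], seed=1)
print(groups)
```

The point of the sketch is simply that the participant's own preference plays no role in which group he or she ends up in.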

Randomization complicates a study seeking to investigate a holistic treatment intervention for various reasons. Patients who seek homeopathic treatment frequently choose homeopathy after they have tried everything else and have not found alleviation [4]. They make a conscious and informed choice to see a holistic practitioner and have hopes, beliefs and expectations of this treatment [5]; [6]; [7]; [8]. In a randomized controlled trial, patient choice is not, and cannot be, respected; the participant has to be allocated by chance in order to avoid potential bias arising, for example, from the above-mentioned distinct and personal opinions and judgements of the individual participants. Yet within these “external” aspects influencing the participant may lie an adjunctive curative potential that, in a holistic treatment, becomes a valuable factor supplemental to the therapeutic impact.

A trial situation therefore does not reflect true clinical practice [9], as randomizing participants removes this decisive factor. The belief that one is actively choosing a treatment that coincides with one's faith and experience is a psychologically potent igniter of recuperative sentiments and dispositions that may spark self-healing influences within the individual [7]; [8].

These factors are the so-called non-specific effects: impacts that are not related to the treatment intervention itself, but may exert an influence on the study findings [5]; [7]; [8]. For example, participants may believe they are in the placebo group and thus make negative judgements of their experiences during the trial, simply because the trial situation feels different from the normal experience of clinical practice. The same holds true for the closer engagement of the practitioner with the patient, which is absent in the process of a trial [8]. This different experience, too, may have an impact on how patients in a trial perceive their participation and the effect of the intervention or of the placebo, depending on the group they are allocated to [7].

It cannot be ignored that such non-specific effects exist. A research method that seeks to minimize these influences therefore cannot deliver a true replication of the situation found in customary clinical practice. Yet this is exactly what the randomized controlled trial seeks to do. The intervention is extracted from the common context in which it is habitually applied and experienced, and as a consequence the outcomes reported by an RCT may be flawed at the very root.

While the specific effects are those considered to come solely from the medicinal substance investigated [10], such non-specific effects are commonly not acknowledged in the spheres of conventional scientific research, and are therefore generally attributed to the placebo, the inactive control in a study [11].

The control group is used to isolate the effects that come only from the tested intervention: the effects specific to the treatment investigated [10]. The participants of the placebo group are subjected to the same procedures as the participants of the treatment group; the only difference is that the medication given to the placebo group is inert, that is, lacking the active ingredient [3]. The effects observed in this sample are then subtracted from the study outcomes, purportedly leaving only the impact of the intervention on the treatment group. So it has been assumed for a long time.
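Expressed as simple arithmetic, the assumption behind this subtraction looks roughly like the following sketch; the outcome scores are hypothetical numbers chosen only to illustrate the logic, not data from any trial.

```python
# Hypothetical mean improvement scores on some outcome scale
treatment_arm = 6.0   # verum arm: specific + non-specific effects combined
placebo_arm   = 3.5   # placebo arm: assumed to capture non-specific effects only

# The two-arm RCT attributes the whole difference to the medicinal substance
estimated_specific_effect = treatment_arm - placebo_arm
print(estimated_specific_effect)   # 2.5
```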

Yet it is increasingly being acknowledged that there is a placebo effect, an impact of the non-specific factors. The RCT is blind to these, as its design has not been developed to be sensitive to such effects; its focus is on the specific impact of the trial medication only. Yet even telling participants that they have a 50% chance of being in the treatment group may influence them one way or another [7], and as a consequence their reported outcomes may be affected. This distorts the measurement, as no individual is ever completely neutral and isolated. Therefore, although the trial situation differs from true clinical practice, and the trial experience differs from habitual therapeutic settings, a participant's own thoughts, habits, common sense, beliefs and experiences, and those instigated by others or by the surroundings, do have an impact [5]; [6]; [7]; [8].

Most RCTs have been conducted without giving any value to the influence of a placebo, yet a placebo effect occurring in a trial is measurable. By including a third study arm in a trial, a potential placebo effect can be calculated. This arm must consist of a group that is left untreated, frequently denoted as a 'waiting list' group [3]. The placebo effect is then measured by comparing the findings in the placebo arm with the outcomes in the non-treatment group. With this evaluation, a trial investigating CAM interventions would potentially deliver outcomes that were more genuine and probably more consistent, as it would account for a major variable that is too often not considered in research.
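Under such a three-arm design, the decomposition described above could be sketched as follows; again, the numbers are hypothetical and serve only to show how the waiting-list comparison makes the placebo effect visible.

```python
# Hypothetical mean improvement scores in a three-arm trial
treatment_arm = 6.0   # verum remedy
placebo_arm   = 3.5   # inert remedy, same procedures as the verum arm
waiting_list  = 1.0   # untreated ('waiting list') group

placebo_effect  = placebo_arm - waiting_list    # non-specific effects made measurable
specific_effect = treatment_arm - placebo_arm   # effect attributed to the remedy itself
print(placebo_effect, specific_effect)          # 2.5 2.5
```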

For the practices of CAM, though, this still means that the full scope of the non-specific effects remains unaccounted for, as randomization eliminates them. And while a third arm could deliver better measures, this procedure does not diminish the other weaknesses of such a trial. There remains further, significant potential for studies investigating CAM to be distorted when the fundamental principles of the investigated health care approach are incorrectly applied, as too often they are not respected.

Therefore, any research into the holistic alternative therapies, and into homeopathy in particular, that uses the RCT is doomed to deliver outcomes weakened by the methodological design of the tool itself, and it is consequently not surprising that the results of such trials and studies are inconsistent.

References:

[1] Golden, I. (2012). Beyond Randomized controlled trials: Evidence in Complementary Medicine. Journal of evidence-based complementary & alternative medicine, 17(1), 72-75. doi: 10.1177/2156587211429351

[2] Corrigan, P. & Salzer, M. (2003). The conflict between random assignment and treatment preference: implications for internal validity. Evaluation and program planning, 26(2), 109-121. doi: 10.1016/S0149-7189(03)00014-4

[3] Horn, B., Balk, J. & Gold, J. (2011). Revisiting the sham: Is it all smoke and mirrors? Evidence-based complementary and alternative medicine, 2011, 4 pages. doi: 10.1093/ecam/neq074

[4] H:MC2. (n.d). A check without balance. Homeopathy: Medicine for the 21st century. Retrieved May 16, 2013, from http://www.hmc21.org/#/check-without-balance/4543591988

[5] Kaptchuk, T., Stason, W., Davis, R., Legedza, A., Schnyer, R., Kerr, C., Stone, D., Nam Hyun, B., Kirsch, I. & Goldman, R. (2006). Sham device v inert pill: randomized controlled trial of two placebo treatments. BMJ, 332. doi: 10.1136/bmj.38726.603310.55

[6] Nuhn, T., Lüdtke, R. & Geraedts, M. (2010). Placebo effect sizes in homeopathic compared to conventional drugs – a systematic review of randomised controlled trials. Homeopathy, 99, 76-82. doi: 10.1016/j.homp.2009.11.002

[7] Relton, C. (2013). Implications of the 'placebo effect' for CAM research. Complementary therapies in medicine, 21(2), 121-124. doi: 10.1016/j.ctim.2012.12.011

[8] Teixeira, M., Guedes, C., Barreto, P. & Martins, M. (2010). The placebo effect and Homeopathy. Homeopathy, 99, 119-129. doi: 10.1016/j.homp.2010.02.001

[9] Vickers, A. (1995). What conclusion should we draw from the data?. British Homeopathic Journal, 84(2), 95-101. doi: 10.1016/S0007-0785(95)80039-5

[10] Walach, H. (2001a). Das Wirksamkeitsparadox in der Komplementärmedizin [The efficacy paradox in complementary medicine]. Forschende Komplementärmedizin und klassische Naturheilkunde, 8, 193-195. doi: 10.1159/000057221

[11] Enck, P. & Klosterhalfen, S. (2013). The placebo response in clinical trials – the current state of play. Complementary therapies in medicine, 21(2), 98-101. doi: 10.1016/j.ctim.2012.12.010