(migrating over a couple of my entries from my grad school's public health blog, Target Population)
One of the first principles taught to aspiring physicians entering medical school is primum non nocere, or as we more commonly know it, “first, do no harm.” While it obligates doctors to avoid intentionally or unnecessarily risking their patients’ well-being – something we can all agree is positive – determining what sort of treatment they are obligated to provide is a little trickier.
CNN reported on a recent study in the Journal of General Internal Medicine, which found that 45% of internists in the Chicago area had used a placebo in clinical practice at some point in their career. Placebos, usually in the form of sugar pills or another inert agent, are traditionally used as controls in studies testing the effectiveness of new treatments. The aim is to avoid patients reporting symptom improvement simply because they received medication and expect to feel better.
This “placebo effect” was mentioned in the scientific literature as early as 1920, and has been documented and tested since Henry Beecher’s 1955 paper “The powerful placebo.” Its strength is still being debated, but according to the JGIM study, 96% of physicians believed that placebos could have therapeutic effects, and up to 40% believed that positive physiological effects were possible for certain health conditions.
However, the fact remains that only 4% of those who used placebos in clinical practice admitted it to the patients involved. While knowing that your medication is inert would certainly diminish its potential effectiveness as a placebo, what about the motives involved? The majority of physicians administered placebos to patients who were upset, in order to calm them down. But once the patient has calmed down, what prevents doctors from admitting to the treatment after the fact? The ethics of keeping the patient in the dark even then are questionable at best.