8 December 2012

The challenge of scientists

During research I've been doing for an article I'm writing on breasts and breastfeeding, I've had the opportunity to interview two scientists. It was such a pleasure to talk to them: they are accomplished academics, both from Otago University, who struck me as genuine, humble and dedicated to their work.

[Photo: the birth-day of little Anna, breast feeder extraordinaire.]
They happily made time for someone (me) they'd never met to ring them to ask questions they'd probably been asked plenty of times. Both of them, without prompting, immediately did some literature searches to address questions I'd asked that they weren't experts in, and emailed me the papers within the hour.

But how hard it is for scientists to satisfy everyone! One of them is a nutrition researcher who looks at nutrients in breast milk. There was talk of supplementing mothers and/or babies with vitamin D, iron and folate because of the evidence that quite a lot of babies are at risk of not getting enough, and the consequences of that are pretty dire. She's studying the effects of New Zealand's mandatory fortification of bread with folate, about which there was an enormous backlash.

That night in a social setting I talked to a woman who was frustrated that doctors don't recommend nutritional supplementation for health problems. They're only interested in drugs, not vitamins, she said.

It seems to me that they don't recommend things that aren't proven beyond reasonable doubt. For example, vitamin E was touted as a great supplement. There was a trial to see if it prevented prostate cancer: it was stopped early because it wasn't working, and a later follow-up showed that men taking it were significantly more likely to get prostate cancer. And there are lots of examples like that.

Proving something is usually horribly complicated. It's a bit like planning a renovation that seems quite simple. Then the walls are ripped off and the wiring and plumbing are revealed to be a mess, half the tradesmen don't turn up on time and those that do turn up do a bad job. Then it costs twice as much as you'd planned and takes three times as long.

For example, one paper I was sent reviewed many studies looking at whether breast feeding reduced the risk of breast cancer. Simple question? The results were all over the place. Most studies concluded it did, and some that it didn't. But there was no standardised measurement between the studies; for example, some measured no breast feeding compared to any breast feeding - but some mothers may have only breast fed for a fortnight. Others made the comparison with cumulative duration across all children, while others took the average length per child.

There were hints that it is prolonged breast feeding that reduces the risk, but studies from most western countries contained so few women who did so that the effect didn't show very clearly. In China, however, apparently more than half of all women breast feed for at least three years (per child, I think; what lucky babies!). Those women had a 64% reduction in breast cancer risk. It seems to only be a reduction for premenopausal breast cancer though.

That paper was summarising many studies over many years, and the conclusions were far from certain. So much work, so little certainty.

I feel grumpy when doctors prescribe things that aren't proven. My husband's GP suggested he buy some glucosamine for a sore thumb joint that may have been arthritis. Soon after, I read that the summary of evidence suggests it doesn't work particularly well. See ya later, $50. His thumb still hurt.

However, I'm happy to be a little bit experimental. I take a certain zinc supplement if a sinus infection is imminent and it works (unless the powder is old and smells wrong). I went from being terribly ill with the infections and considering sinus surgery to never suffering from them.

I've also been taking iodine drops after I read iodine may help fibrocystic breast disease (sore lumpy breasts on a monthly basis), and that women with fibrocystic breast disease are at higher risk of breast cancer. Two months later the sore lumpy breasts were gone.

I knew in advance that New Zealand-sourced food is generally low in iodine and zinc, so I knew I had a bit of room to move, which is important because too much of these minerals is toxic.

That's one of the benefits of knowing how the scientific process works - you know where to find the information and have some basis on which to judge its validity. There is an awful lot of impressive-sounding 'pseudoscience' on the internet. I'm sure I've even been sucked in by some.

Here's what I look for in a study (PubMed is my first port of call):
- research carried out by researchers at institutions like universities or hospitals
- published in a well-known, reputable peer-reviewed journal
- a control (placebo) group, for the purpose of comparison (because the placebo effect is massive - often people taking the placebo report an improvement too, so the effect needs to be better than placebo, not better than no treatment).
- randomisation, so that who gets the treatment and who gets the placebo is determined randomly. Otherwise, there may be unconscious bias (e.g. the researchers might give the treatment to people based on something like age, severity of symptoms, etc.)
- double blind, which means that neither the patients nor the researchers giving the treatment know who is receiving the treatment and who is getting the placebo. There will, of course, be a spreadsheet somewhere that reveals which is which, but no one involved with giving the treatment will know, so no bias is introduced that way.
- the study has as many participants as possible, because the bigger the group, the more representative the sample. (e.g. five out of thirty children in one classroom might have red hair but if you counted the children across ten different schools the true rate of red hair might be only two out of thirty - hypothetical of course.)
- they have accounted for other causes as much as possible. Generally this means recording income level, age, marital status and goodness knows what else, then doing statistical jiggery-pokery to make sure those things aren't having an effect themselves or influencing the effectiveness of the treatment.
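The sample-size point can be made concrete with a tiny simulation. This sketch (my own illustration, using a made-up "true" red-hair rate of 2 in 30) draws lots of single-classroom samples and lots of ten-schools-sized samples and compares how much the estimated rates bounce around:

```python
import random
import statistics

random.seed(42)          # fixed seed so the run is repeatable
TRUE_RATE = 2 / 30       # hypothetical population rate of red hair

def sample_rate(n):
    """Proportion of red-haired children in one random sample of n children."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# 1000 samples each: one classroom (30 children) vs ten schools (3000 children)
small = [sample_rate(30) for _ in range(1000)]
large = [sample_rate(3000) for _ in range(1000)]

print(f"one classroom (n=30):   mean={statistics.mean(small):.3f}, "
      f"spread={statistics.stdev(small):.3f}")
print(f"ten schools (n=3000):   mean={statistics.mean(large):.3f}, "
      f"spread={statistics.stdev(large):.3f}")
```

Both sample sizes give roughly the right rate on average, but the single-classroom estimates swing wildly from sample to sample, which is exactly why a classroom with five redheads tells you very little about the true rate.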

This covers what is generally considered the 'gold standard' of research: placebo-controlled, randomised, double-blind. It's there to remove bias and make judging the effectiveness of the treatment as objective as possible. Because without it we humans are very good at being subjective. Witness the decades (centuries?) of bleeding sick people! They thought it helped!

Even this isn't usually enough to persuade other scientists. The results generally have to be replicated in a number of different studies - and hopefully those different studies use exactly the same measurements, but that is definitely not always the case.

And it's really important to acknowledge that our knowledge about things is always changing and updating, and that's the only way that the pursuit of any complicated 'truth' can ever work. A drug company immediately withdrew a vaccine a few years back because the babies were getting a horrible side effect (a gastrointestinal problem, from memory). When the post-menopausal hormone replacement therapy results (increased cancer and heart disease) came out ten or so years ago, the results were immediately made widely known and the majority of women stopped taking it. So scientists do backtrack and withdraw when proven mistakes are made, but they do their best to make sure they don't have to.

We humans are, however, so imperfect! The process of gathering knowledge is also imperfect. I do know that all the scientists I've worked with over the years are totally dedicated to getting closer to the 'truth'. Getting there, however, is harder than it first seems.
