What does it mean when we read that 'studies show X, Y and Z'?
Are we given enough information in these reports to really
understand what the study shows?
Scientists carry out two main types of study. In the first,
they select a group of people, test something on that group
and observe what happens. In the second, they look at how
people live, and the illnesses they get, and try to establish
what people are doing that gives them a greater risk of
developing some condition. In its simplest form, they will
look at as many people as possible who have a condition, and
then see if they can find a common thread.
When scientists want to test whether a particular medicine or
procedure works, they run a trial. A proper study can only be
considered conclusive if a double-blind trial is run. This is
where you take a group of individuals (the larger the group,
the more reliable the results) and match them as closely as
possible for age, gender, other health issues, race,
financial status, education and anything else you can. The
more closely matched the group, the easier it is to interpret
and rely on the results. The group is then split at random;
one half is given the real medicine, and the other half is
given a placebo.
A placebo looks exactly like the medicine but is usually
made of some sort of sugar, with a taste added to fool the
patient into thinking he is getting real medicine. To make
the results even more reliable, the doctor who gives out the
medicine also does not know whether he is giving the real one
or the placebo.
This is called the 'double blind'; both the patient and the
administering doctor are 'blind' as to the type of tablet
being used.
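To picture how the random split and the 'blinding' fit together, here is a minimal sketch in Python (the patient names and bottle codes are invented for illustration). Each tablet bottle carries only a code; the key linking codes to 'medicine' or 'placebo' is held back until the trial ends, so neither patient nor doctor knows who is getting what.

```python
# A minimal sketch of double-blind assignment; all names and codes
# are invented for illustration.
import random

participants = [f"patient_{i:03d}" for i in range(1, 101)]
random.shuffle(participants)       # random split removes selection bias

half = len(participants) // 2
blinding_key = {}                  # sealed until the trial is over
for idx, patient in enumerate(participants):
    code = f"TAB-{idx:03d}"        # the only label doctor and patient see
    blinding_key[code] = "medicine" if idx < half else "placebo"
    print(patient, "receives bottle", code)

# Only after every result has been recorded is blinding_key opened,
# revealing which coded bottles held the real medicine.
```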
The researchers then look at the improvement in both groups.
Interestingly, we are not surprised to see some improvement
in some of the people given the placebo. What we look for
are statistically
significant differences between the two groups. Many studies
are stopped early, on ethical grounds, when early results
show such a marked improvement in the medicated group that it
is considered immoral for the other group to miss out on that
benefit, especially in cases where the difference may be a
matter of life and death.
A correctly constructed study should have clear criteria for
defining the 'problem' and what constitutes improvement. It
is best when these results involve numbers that can then be
tabulated and compared statistically, rather than general,
subjective descriptions of 'feeling better.'
An example of this type of study is a trial to look at the
ability of a certain medicine (Medicine A) to control high
blood pressure. We find a group of a hundred overweight,
white, middle-class men, fifty of whom receive Medicine A,
and fifty of whom (the control group) take the placebo. The
results might show that those who took Medicine A had lower
blood pressure after a month, on average, compared to those
in the control group (who took the placebo). Interpreting
results like these is quite straightforward.
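As a rough sketch of what 'comparing statistically' might look like, here is an illustrative simulation in Python. Every number is invented; the point is only that the two group averages are compared with a standard two-sample t-test, where a small p-value suggests the gap is unlikely to be chance alone.

```python
# Illustrative only: invented blood-pressure readings for 50 medicated
# men and 50 controls, compared with a standard two-sample t-test.
import random
from scipy import stats

random.seed(1)
medicated = [random.gauss(135, 10) for _ in range(50)]  # assumed effect
control   = [random.gauss(145, 10) for _ in range(50)]  # placebo group

t_stat, p_value = stats.ttest_ind(medicated, control)
print(f"average, medicated: {sum(medicated) / 50:.1f} mmHg")
print(f"average, control:   {sum(control) / 50:.1f} mmHg")
print(f"p-value: {p_value:.4f}")  # a small p-value means the gap is
                                  # unlikely to be chance alone
```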
However, in the second type of study, the researchers try to
find links between certain lifestyle factors (smoking,
exercise, watching TV, eating raw fish) and blood pressure.
In a study such as this you get results like, "Japanese men,
who eat a lot of raw fish, have lower blood pressure than
American men, who do not eat raw fish." This can then be
interpreted as meaning 'raw fish lowers high blood pressure.'
However, there may be so many other differences between
American and Japanese diets and lifestyles that this study
alone cannot show a direct causal link between raw fish and
lower blood pressure.
It's like understanding the difference between the following
two statements: 1) "Most people who get lung cancer are
smokers" and 2) "Most smokers get lung cancer." These
statements are not the same. The former shows a link between
smoking and lung cancer, and suggests that if you smoke you
should give it up, as you may get lung cancer. However, the
latter statement suggests that if you smoke, you will get
lung cancer, and that if you don't smoke, you won't.
The first statement is true while the second is not. There
are those who smoke, and will not get lung cancer, and there
are people who get lung cancer who never smoked (and did not
suffer from passive smoking either). The first statement may
be the conclusion of a study (written up with appropriate
percentages and statistics), printed in a learned periodical,
while the latter may be the headline in newspapers.
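The asymmetry is easy to check with invented, round numbers (not real epidemiological figures). In the toy town below, the first statement comes out true while the second is plainly false:

```python
# Invented, round numbers purely to show the asymmetry; these are not
# real epidemiological figures.
smokers, cancer_in_smokers = 200, 20        # a toy town of 1,000 people
non_smokers, cancer_in_non_smokers = 800, 5

# Statement 1: of those with lung cancer, how many smoke?
p1 = cancer_in_smokers / (cancer_in_smokers + cancer_in_non_smokers)
# Statement 2: of the smokers, how many get lung cancer?
p2 = cancer_in_smokers / smokers

print(f"{p1:.0%} of lung-cancer patients are smokers")  # 80% - true
print(f"{p2:.0%} of smokers get lung cancer")           # 10% - not 'most'
```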
The second problem with understanding studies is the 'placebo
effect.' Going back to the study on Medicine A for the
treatment of high blood pressure, it would be quite normal
for some in the control group to have some improvement in
their blood pressure. The improvement in a few individuals
might even be as good as in the medicated group.
Statistically, however, looking at each group as a whole, the
control group will not show as great an improvement, in as
many people, as the medicated group. The reverse also holds:
some in the medicated group will not be helped at all, or not
as much as others. How people respond to medicine is very
variable.
This placebo effect is very difficult to explain. For some
reason, certain individuals improve without any medication,
because they believe that they are taking medication. In
fact, there will be some individuals who spontaneously
improve even if they take nothing (not even a placebo). Even
what the doctor knows seems to have an influence on the
results. That is why a "double blind" is used, where the
doctors also don't know whether they are giving the placebo
or the real medicine.
Studies most often establish statistical links, which is not
the same as proving 'cause and effect.' The debate about
smoking raged for so long because although it was easy to
prove a link between certain diseases and smoking, it was
harder to prove that one directly caused the other. As long
as that proof didn't exist, tobacco companies chose to
interpret the results in a way that was less damaging to
their business. The tobacco companies themselves did a lot of
the early research, which raised serious questions about how
reliable the results were.
Take an absurd example. There are certain illnesses that are
more prevalent among those of greater wealth, and other
illnesses that seem to affect poorer people more. Taken at
face value, we could say that it is the money itself, or the
lack of it, that is making these people ill. That would be a
"cause and effect" argument. However, there is clearly no
logical way for money itself to do that. It must therefore be
something in the lifestyle of rich people that makes them
prone to those illnesses, and likewise for the poor.
Poor people buy less fresh fruit and vegetables, visit the
doctor less often (they can't afford the time off work, and
are less educated about illnesses for which they should seek
help), have more unhealthy jobs (involving dust, heat,
vibration, etc.) and are more likely to smoke. Rich people are
more likely to eat heavy, rich (in both meanings of the
word!) meals, have high-stress jobs and have high blood
pressure. Finding the actual factor that causes any
particular illness is very difficult.
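A toy simulation (all rates invented) shows how a hidden lifestyle factor can masquerade as a wealth effect. Here money plays no role at all in who falls ill; smoking is the only cause, and it is simply more common in the poorer group, yet the raw figures show the poor falling ill more often:

```python
# Toy simulation, all rates invented: wealth has no effect at all on
# illness, yet a wealth-illness "link" appears in the figures.
import random

random.seed(0)

def falls_ill(poor):
    # Smoking is assumed more common among the poor ...
    smokes = random.random() < (0.5 if poor else 0.2)
    # ... and illness depends only on smoking, never on money.
    return random.random() < (0.3 if smokes else 0.1)

n = 10_000
poor_rate = sum(falls_ill(True) for _ in range(n)) / n
rich_rate = sum(falls_ill(False) for _ in range(n)) / n
print(f"illness rate, poorer group: {poor_rate:.1%}")  # about 20%
print(f"illness rate, richer group: {rich_rate:.1%}")  # about 14%
```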
This is how the link was formed between crib death (SIDS,
Sudden Infant Death Syndrome) and laying a child on its
tummy. This is not to say that we know why they are linked,
or even that it is 'cause and effect,' only that the link
showed up strongly in the statistics, and after an education
program was run to teach mothers to put their babies on their
backs, the number of crib deaths fell dramatically. That is
not to say that all babies who sleep on their tummies are at
risk, or that babies don't also sometimes die when on their
back. It's a statistic.
In newspapers (less commonly in the respectable secular
press), studies are written up without any explanation of
what type of study it was, how many people were involved or
what kind of results were achieved. No source is given, so
the original paper cannot be checked (assuming we could
understand it if we did read it). We just get a sort of
'headline' that is often an interpretation rather than a
result, and one very much influenced by the prejudices of the
writer. We are told "studies show . . ." but are given no way
to judge how conclusive these studies are. This obscuring and
misinterpreting of results is common for studies involving
food supplements, alternative remedies and issues surrounding
childbirth.
For example, although many food supplements may have an
effect on a particular condition, given what we know about
the placebo effect, we need to know whether the results from
using the supplement are any better than the placebo effect
alone would produce. Was the condition being treated clearly
defined? Was the improvement clearly measurable? Was a
double-blind design used? The fact that
someone was helped is great for that person, but does not
constitute evidence that it will help someone else. Likewise,
even a medicine that has been proven effective will still be
unhelpful for some people. The fact that you know
someone who wasn't helped doesn't mean it won't help you.
It is in the very nature of some of these things that
large-scale double-blind trials are very difficult to run;
that certainly doesn't mean they are not effective, just that
their effectiveness is difficult to prove. One ends up taking the
attitude "it's worth a try," or "if it works for me then it
doesn't matter why, or whether it's just the placebo
effect."
Knowing the results of a study still does not always mean we
really understand what is going on. People who have their own
axe to grind often interpret the results for us. The media or
advertising industry often distorts or misinterprets the
results of studies, either from ignorance or because they
have an agenda of their own. It is important to try to be an
"informed consumer," or at least a sceptical one,
particularly if the underlying purpose of telling us these
"study results" is to sell us something.
An Interesting Study
Oxford University's physiology department conducted the
following study. First they assessed 100 children of normal
ability who were underachieving in reading and were also
suspected of being dyspraxic (a problem with motor planning
and coordination). Then they divided the children at random
into two groups.
They gave half the children a supplement of Omega 3 fatty
acids, and half the children a placebo (a fake tablet).
Neither the children nor the researchers administering the
tablets knew who was in each group. Of the group given the
supplements, 40% made a dramatic improvement, averaging nine
months' progress in three months. The control group (given
the placebo) made the normal three months' progress. After
three months, the control group were also switched to the
months, the control group were also switched to the
supplements and showed similar leaps in progress.
Although none of the children had been diagnosed as having
ADHD, about a third of the children had sufficient problems
to suggest that they had an attention disorder. During the
course of the trial, about half of these children made enough
progress to no longer be considered in that category. This is
on a par with results from Ritalin.
The underlying mechanism seems to be that the brain, and the
retina at the back of the eye, are filled with neurones (the
'wiring' of the brain), and that these fatty acids play an
important role in enabling the neurones to work and to
transmit signals to each other.
In a separate, well-controlled, double-blind placebo trial,
Omega 3 fatty acid supplements were given to prisoners who
had a history of violence while in prison. The number of
serious offences, including violence, in the group taking the
supplements dropped by 40%, but not at all in the control
group.
Why should we need to take a supplement when everything we
need should be available in our diet? Also, if children in
the same family eat the same diet, why don't they all have
the same problems? In answer to the first question, our diets
have changed over the years, and we leave out a lot of things
we used to eat. Omega 3 fatty acids are found in oily fish,
nuts, seeds and leafy vegetables (though fish is the richest
source), and these foods used to be a bigger part of our
diet. They are also very sensitive chemicals that break down
easily during food processing. In answer to the second question,
there are indications that some people inherit a
predisposition to being deficient, even when sufficient
quantities are available in the diet.
It is important to note that although 40% of the children
made these significant improvements, 60% did not, so the
supplements are not going to be a panacea. Secondly, although
there were children with attention deficiency who improved on
the supplements, and the improvement was statistically the
same as that seen with Ritalin, we don't know whether the
children who would benefit from Ritalin are the same ones who
would benefit from supplements. It might be a different half.
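A tiny sketch with ten hypothetical children makes the point: the aggregate figure "about half improved" is the same whether the two responder sets overlap completely, partially or not at all, so the percentages alone cannot tell us which it is.

```python
# Hypothetical responder sets for ten imaginary children: half respond
# to each treatment, but the two halves need not be the same children.
ritalin_responders    = {1, 2, 3, 4, 5}   # invented
supplement_responders = {4, 5, 6, 7, 8}   # invented

print("respond to both: ", sorted(ritalin_responders & supplement_responders))
print("supplement only: ", sorted(supplement_responders - ritalin_responders))
print("ritalin only:    ", sorted(ritalin_responders - supplement_responders))
# The aggregate figure ("about half improved") is identical whatever
# the overlap, so it cannot tell these situations apart.
```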
We would have to study children on Ritalin, take half of them
off it and give them these supplements instead, give the
other half a placebo, and then see how they respond. Anyone
considering this switch should seek professional advice
first, as altering the treatment regime of a child can have
serious behavioral and emotional consequences. However, if
you have a child who has not yet started any treatment for
attention deficiency, or who has dyslexia, dyspraxia or
behavioral problems, then this may be "worth a try."
(Sources for the study on Omega 3 fatty acids available on
request ++44 191 477 5239.)