May 05, 2015

Nowadays it seems we hear about a new diet trend and read about the latest nutrition research findings on a daily basis. With all of this research being published and communicated through the media, everyone, including health professionals, may feel a bit confused about how a study should be interpreted and what to take away from it. The strength of a study's findings, and whether a true cause-and-effect relationship exists, depends on multiple factors, including a very basic one: the study design, and where it sits within the hierarchy of scientific evidence.

For studies that directly involve human subjects, there are generally two types of designs: experimental and observational, with the quality of evidence increasing as you move up the hierarchy.

Randomized controlled trials (RCTs) provide the strongest evidence among all experimental studies and are the only study design that can establish cause and effect.

Experimental studies (controlled trials) are designed to modify a specific human behavior and evaluate its effect over a certain period of time. Randomized controlled trials (RCTs) provide the strongest evidence among experimental studies because randomly assigning study subjects to experimental and control groups minimizes sources of bias (e.g. selection bias) and helps ensure the groups are similar across many characteristics, both measured and unmeasured. The highly controlled environment allows researchers to answer a simple question: does the treatment (e.g. fish oil consumption) cause the outcome (e.g. reduced risk factors for heart disease) within the specific study environment?
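The logic of random assignment can be sketched in a few lines of code. This is only a toy illustration: the subject IDs and the simple two-arm, equal-size split are hypothetical, not taken from any particular trial.

```python
import random

def randomize(subjects, seed=42):
    """Randomly assign subjects to two groups of equal size.

    Because assignment ignores every subject characteristic, the two
    groups end up similar, on average, across both measured and
    unmeasured traits -- the property that lets an RCT speak to cause
    and effect.
    """
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    shuffled = subjects[:]     # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical subject IDs for illustration only
subjects = [f"S{i:03d}" for i in range(1, 101)]
treatment, control = randomize(subjects)
print(len(treatment), len(control))  # 50 subjects in each arm
```

Real trials often use more elaborate schemes (e.g. block or stratified randomization), but the core idea is the same: chance, not the researcher or the subject, decides who receives the treatment.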

Observational studies, including ecological, cross-sectional, and prospective cohort studies, may provide insight into observed associations but cannot determine cause-and-effect relationships.

Observational studies, on the other hand, take a passive approach, recording various aspects of human behavior related to the topic under investigation. Prospective cohort studies provide the strongest evidence among observational designs: they follow a group of subjects of interest over an extended period of time and observe whether a certain behavior (e.g. fish consumption) results in, or protects against, a certain outcome (e.g. heart attack). Although prospective cohorts are the highest-quality observational studies, what often gets lost in media coverage is their list of limitations, and the fact that links derived from observational studies are associations, not cause-and-effect relationships.

One major limitation of prospective cohort studies is that the behaviors (e.g. fish consumption) are usually measured years before the onset of the disease. Behaviors may change over time from their baseline values, skewing the measured association. Without periodic monitoring, however, the size of that deviation cannot be quantified and adjusted for. In addition, because the environment is not controlled, all observational studies are subject to the risk of "confounding", i.e. the presence of other variables that are also related to the exposure and/or the outcome. These confounders may not have been recognized by the researchers when designing the study, or may not have been measured for practical reasons. For example, without assessing major dietary and lifestyle factors (e.g. consumption of fruits and vegetables, smoking, physical activity) that also potentially influence the outcome of interest (e.g. heart attack), it is difficult to attribute an outcome that develops years later solely to the fish consumption measured at the very beginning of the study.

Systematic reviews and meta-analyses consider the totality of available scientific evidence; their limitations reflect the question asked and the types of studies reviewed.

At the very top of the hierarchy of scientific evidence are systematic reviews and meta-analyses. They do not directly study a group of human subjects but rather a collection of published studies that meet carefully crafted selection criteria, thus taking into consideration the totality of available scientific evidence rather than a single study. A meta-analysis provides a quantitative measure of the association under investigation, helping us understand the effect size and allowing easier comparison with other effects. Systematic reviews are still subject to a number of limitations. Most importantly, the quality of a systematic review depends on the designs of the studies it includes; it can only be as good as the best available evidence it summarizes. For example, systematic reviews of RCTs offer much stronger evidence than systematic reviews of prospective cohort studies.
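The pooling step at the heart of a meta-analysis can be illustrated with a common fixed-effect (inverse-variance) calculation, in which more precise studies receive more weight. The effect sizes below are made up purely for illustration and do not come from any real studies.

```python
def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate.

    Each study is weighted by 1 / SE^2, so studies with smaller
    standard errors (typically larger, more precise studies) pull the
    pooled estimate toward their result more strongly.
    """
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5  # pooled estimate is more precise
    return pooled, pooled_se

# Hypothetical effect sizes (e.g. log relative risks) from three studies
effects = [-0.20, -0.10, -0.35]
std_errors = [0.10, 0.15, 0.20]
est, se = pooled_effect(effects, std_errors)
```

Note that the pooled standard error is smaller than any individual study's, which is one reason meta-analyses sit at the top of the hierarchy; real meta-analyses also assess heterogeneity between studies and may use random-effects models instead.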

Perhaps next time you read a study and want to tweet about it or share it with your circle of readers, clients, and friends… remember to read more than just the authors' conclusion or the media headline. Ask a few questions first: Is the study relevant to humans? What type of study is this? Is the conclusion supported by the data? And above all, where does it sit on the hierarchy of scientific evidence?