The advertising-obesity link is certainly contentious.
Christopher Chabris is a professor of psychology at Union College. Daniel Simons is a professor of psychology at the University of Illinois. They are the authors of The Invisible Gorilla: How Our Intuitions Deceive Us.
Obesity is a problem everywhere, with significant consequences for personal health and public spending. People weigh more than ever — but why? If we can find the causes of obesity, we can try to eliminate or counter them.
Unfortunately, finding causes is easier said than done, and causes we think we see can turn out to be illusions. Consider a recent study in the journal BMC Public Health under the anodyne title "Outdoor advertising, obesity, and soda consumption: a cross-sectional study."
A team of researchers walked every street in 228 census tracts around Los Angeles and New Orleans and recorded every outdoor ad they saw. Another group surveyed 2,881 residents of the same census tracts by telephone, paying them to report their height, weight and other information.
After analysing this hard-won data, the authors conclude: "For every 10 per cent increase in food advertisements, the odds of being obese increased by 5 per cent." That is, areas with more outdoor food ads have a higher proportion of obese people than ones with fewer ads.
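To make the quoted arithmetic concrete, note that odds (the probability of being obese divided by the probability of not being obese) are not the same thing as probabilities. The following is a minimal sketch of what the reported association would mean numerically, assuming a purely illustrative 30 per cent baseline obesity rate, a figure not taken from the study:

```python
# Illustration of the reported association, not the study's actual model:
# each 10% increase in food ads multiplies the odds of obesity by 1.05.

def odds_to_probability(odds):
    """Convert odds (p / (1 - p)) back to a probability p."""
    return odds / (1 + odds)

baseline_probability = 0.30  # assumed baseline obesity rate, for illustration only
baseline_odds = baseline_probability / (1 - baseline_probability)

# One "step" of 10% more food ads raises the odds by 5%.
odds_after_one_step = baseline_odds * 1.05
probability_after_one_step = odds_to_probability(odds_after_one_step)

print(round(baseline_odds, 4))               # ≈ 0.4286
print(round(probability_after_one_step, 4))  # ≈ 0.3103
```

Note that the 5 per cent rise in odds corresponds to a rise in prevalence from 30 per cent to only about 31 per cent, a reminder that odds ratios can sound larger than the underlying change in proportions.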
Referring to their advertising-obesity link, the authors later write, "If the above associations are confirmed by additional research, policy approaches may be important to reduce the amount of food advertising in urban areas." They discuss bans, warning labels and a tax on obesigenic — that is, obesity-generating — advertising in the US.
We first encountered this study in a news release from one of the authors' academic institutions. Public relations being what it is, such documents often exaggerate the importance and minimise the limitations of the underlying research. In this case, however, the researchers themselves went out on a limb that their data did not fully support.
The problem is that their policy recommendations rest on a crucial but unjustified assumption: that any link between obesity and advertising occurs because more advertising causes higher rates of obesity. But the study at hand showed only an association — people living in areas with more food ads were more likely to be obese than people living in areas with fewer food ads. To be fair, the researchers correctly note that additional steps would be needed to prove that food ads cause obesity. But until those steps are taken, talk of restricting ads is premature.
In fact, it is easy to imagine how the causation could run the opposite way (something the article did not mention): If food vendors believe obese people are more likely than non-obese people to buy their products, they will place more ads in areas where obese people already live.
Suppose we counted ads for fitness-oriented products like bicycles and bottled water, and found more of those ads in places with less obesity. Would it then be wise anti-obesity policy to subsidise such ads? Or would the smarter conclusion be that the fitness companies suspect that the obese are less likely than the fit to buy their products?
This is not an arcane statistical point or a mere technical criticism of one academic article. Too often, relationships that are far from being understood are assumed to reflect a particular, strong causal connection, leading to no end of regulatory mistakes. Time, effort and money wasted on unsupported policies decrease the resources available for finding and implementing real solutions.
When we seek to base policy on evidence, we must remember that not all "evidence" is created equal. Taken at face value, the study on ads and obesity provides some indication that the two are linked, but no evidence that food ads cause obesity. The fact that the causal conclusion may coincide with a moral belief — that it is wrong to tempt people who overeat by showing them ads for food — does not make it valid.
"Confirming" the association with more studies will not change this logic. Indeed, it would add just as much evidence for the counter-theory that obesity in neighbourhoods causes food ads to be placed there. Put another way, no matter how often we see that wealthy people live near luxury stores, we cannot conclude that Neiman Marcus is "wealthogenic."
Still, we need not throw up our hands. There are ways to legitimately test whether outdoor ads cause obesity. We could start by seeking cases in which food advertising increased or decreased for reasons that had nothing to do with trends in obesity. Researchers could examine changes in zoning laws regarding billboards, for example, and compare the obesity trends in areas where the changes led to more or less advertising.
The gold standard for inferring causation in social science, as in medicine, is the randomised controlled trial, in which people or places are randomly assigned to receive different treatments. In this case, the "treatment" would be the number of outdoor food ads in an area. But advertisers are unlikely to agree to randomly distribute their signs, nor would people consent to live in a randomly chosen place.
A more practical variant would be to restrict food ads temporarily in a randomly selected set of areas, and then compare the prevalence of obesity over time in those areas and the nonrestricted ones. Carefully executed, such an experiment would be a much better test of the hypothesis of obesigenic advertising.
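The design described above can be sketched in outline. The area count, the placeholder prevalence figures and all variable names below are invented for illustration; none of them come from the article or the study:

```python
import random

random.seed(0)  # reproducible assignment, for the illustration only

# Hypothetical: 100 areas, half randomly assigned to a temporary ad restriction.
areas = list(range(100))
random.shuffle(areas)
restricted, unrestricted = areas[:50], areas[50:]

# In a real experiment, obesity prevalence would be measured in each area after
# the restriction period; these identical placeholder rates stand in for that data.
prevalence = {area: 0.30 for area in areas}

def group_prevalence(group):
    """Average obesity prevalence across the areas in a group."""
    return sum(prevalence[a] for a in group) / len(group)

# Because assignment was random, the difference between the two groups estimates
# the causal effect of the restriction rather than a mere association.
effect_estimate = group_prevalence(restricted) - group_prevalence(unrestricted)
print(effect_estimate)  # 0.0 here, since the placeholder rates are identical
```

The key design choice is that randomisation, not the statistics afterwards, is what breaks the link between where ads are placed and who already lives there.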
That the experiment would be hard to do does not entitle us to act as though we already know its results. Pretending we have evidence for a cause when all we really have is an association can lead to erroneous and even harmful policies, and ultimately the deprecation of science as a guide to wise government.
Christopher Chabris and Daniel Simons/ The New York Times News Service