Why the Joy of Cooking is going after a Cornell researcher

By Brian Resnick

Brian Wansink, whose “bottomless bowls” study (which demonstrated that people will mindlessly guzzle down soup as long as their bowls are automatically refilled) is one of many that have been called into question. | Jason Koski, Cornell News Bureau

Brian Wansink’s food-behavior studies constantly made headlines. Now, there are deep doubts about his work.

America’s most celebrated cookbook brand is calling out one of America’s most cited food scientists — the latest chapter in a bigger scandal that has been rocking social science.

The tiff erupted on Tuesday morning, when the Joy of Cooking called Cornell’s Brian Wansink a “bad researcher” on Twitter and claimed he had unfairly implicated the cookbook brand in the obesity epidemic with a flawed 2009 scientific study.

(THREAD) Inspired by @stpehaniemlee ‘s new piece, we have decided to share this. We have the dubious honor of being a victim of @BrianWansink and Collin R. Payne’s early work. pic.twitter.com/s4NUd1YpqC

— Joy of Cooking (@TheJoyofCooking) February 27, 2018

In a fascinating Twitter thread that goes far deeper into the intricacies of scientific research than you’d probably expect from the Joy of Cooking — discussing unrepresentative sample sizes and cherry-picking — the brand alleges serious misconduct by Wansink. “The rote repetition of his work needs to stop,” it said.

So what prompted this tirade? Over the past two years, a cadre of skeptical researchers and journalists, including BuzzFeed’s Stephanie Lee, have taken a close look at Wansink’s food psychology research unit, the Food and Brand Lab at Cornell, and have shown that unsavory data manipulation ran rampant there. The cookbook brand, inspired by Lee’s recent feature, says the 2009 study about them is yet another example of Wansink’s dubious scientific practices.

But this story is a lot bigger than any cookbook or any single researcher. It’s important because it helps shine a light on persistent problems in science that have existed in labs across the world, problems that science reformers are increasingly calling for action on. Here’s what you need to know.

Six of Wansink’s studies have been retracted and the findings in dozens more have been called into question


In 2009, Wansink and a co-author published a study, which quickly went viral, suggesting that the Joy of Cooking cookbook (and others like it) was contributing to America’s growing waistline. It found that recipes in more recent editions of the tome — which has sold more than 18 million copies since 1936 — contain more calories and larger serving sizes than those in its earliest editions.

The study focused on 18 classic recipes that have appeared in the Joy of Cooking since 1936 and found that their average calorie density had increased by 35 percent per serving over the years. But in its tweetstorm, the cookbook maker accused Wansink of cherry-picking recipes, making up arbitrary portion sizes, and smearing their name under the guise of science.

Still, when the study appeared, it got a lot of media coverage and helped Wansink reinforce his larger research agenda: that the decisions we make about what and how much we eat are very much shaped by environmental cues. See his famous “bottomless bowls” study, concluding that people will mindlessly guzzle down soup as long as their bowls are automatically refilled, or the “bad popcorn” study, which demonstrated that we’ll gobble up stale and unpalatable food when it’s presented to us in huge quantities.

The critical inquiry into his work started in 2016 when Wansink published a blog post in which he inadvertently admitted to encouraging his graduate students to engage in questionable research practices. Since then, scientists have been combing through his body of work and looking for errors, inconsistencies, and general fishiness. And they’ve uncovered dozens of head-scratchers. For instance, in one study Wansink misidentified the ages of participants, calling children ages 8 to 11 toddlers.

In sum, the collective efforts have led to a whole dossier of troublesome findings in Wansink’s work.

To date, six of his papers have been retracted from journals. But this debacle has drawn a lot of attention because Wansink was highly cited and his studies were catnip for reporters (including us here at Vox). Wansink also collected government grants, helped shape the marketing practices at food companies, and worked with the White House to influence food policy in this country.

For now, Wansink is still defending his research, according to BuzzFeed, and Cornell is investigating his work.

Wansink allegedly engaged in “p-hacking on steroids”


Among the biggest problems in science that the Wansink debacle exemplifies is the publish-or-perish mentality.

To be more competitive for grants, scientists have to publish their research in respected scientific journals. For their work to be accepted by these journals, they need positive (i.e., statistically significant) results.

That pressure leads some labs, like Wansink’s, to engage in what’s known as p-hacking. The “p” stands for p-value, a measure of statistical significance. Typically, researchers hope their results yield a p-value of less than .05 — the conventional cutoff below which results can be called statistically significant.

P-values are a bit complicated to explain (as we do here and here). But basically: they’re a tool to help researchers gauge how unlikely their results would be if chance alone were at work. If the results would be very unlikely under chance alone, scientists can feel more confident their hypothesis is correct.

Here’s the thing: p-values below .05 aren’t that hard to find if you sort the data differently or perform a huge number of analyses. Take coin flipping: you’d think it’s rare to get 10 heads in a row. If it happened, you might start to suspect the coin is weighted to favor heads — and that the result is statistically significant.

But what if you just got 10 heads in a row by chance (it can happen) and then suddenly decided you were done flipping coins? If you kept going, the streak would likely wash out, and you’d stop believing the coin is weighted. Stopping data collection the moment the numbers look significant is one classic form of p-hacking.
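The optional-stopping trap described above is easy to simulate. The sketch below is not from the article; the function names, the 500-trial count, and the choice to start peeking at flip 10 are all illustrative. It flips a fair coin, checks a two-sided binomial p-value after every flip, and stops as soon as p drops below .05. Across many simulated “studies,” that peeking produces far more than the nominal 5 percent of false positives, even though the coin is perfectly fair.

```python
import math
import random

def binom_two_sided_p(heads, n):
    """Two-sided p-value for a fair coin: the probability of a
    heads count at least as far from n/2 as the one observed."""
    dev = abs(heads - n / 2)
    total = sum(math.comb(n, k) * 0.5 ** n
                for k in range(n + 1)
                if abs(k - n / 2) >= dev)
    return min(total, 1.0)

def optional_stopping_trial(max_flips=100, rng=random):
    """Flip a fair coin, checking the p-value after every flip
    (starting at flip 10) and stopping as soon as p < .05.
    Returns True if we 'found' a significant result."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += rng.random() < 0.5
        if n >= 10 and binom_two_sided_p(heads, n) < 0.05:
            return True  # declared the coin "weighted" and stopped
    return False

random.seed(0)
trials = 500
false_positives = sum(optional_stopping_trial() for _ in range(trials))
print(f"False positive rate: {false_positives / trials:.0%}")
```

A researcher who committed to a fixed number of flips in advance would cross the .05 line only about 5 percent of the time; checking after every flip and stopping at the first “significant” streak inflates that rate several-fold.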
