I’m pasting this in from a newsletter from Nina Teicholz so I can share it. Taubes is the best food writer out there, and he’s a rare skeptic when it comes to the sacred cows of eating.
Introductory Note from Nina:
I’m delighted to share this post with you by Gary Taubes, a science journalist who for most of you needs no introduction. Gary single-handedly revived the carbohydrate-insulin hypothesis in his seminal book, Good Calories, Bad Calories (2007), although he had already come to fame with his 2002 New York Times Magazine cover story, “What if it’s All Been a Big Fat Lie?” No single person, be it a journalist, researcher, or other expert, has contributed more to shifting the paradigm on dietary fat and carbohydrate than Gary Taubes. It’s an honor to have him contributing here.
Introductory Note from Gary:
I have been hoping to join Substack for a while now – either as a guest or co-columnist on Nina’s site, or on my own. I’ve been looking for a timely subject that worked well as a first column: in short, one important and interesting enough that I could detach myself from my day job, researching and writing books, and devote the attention necessary to write thoughtfully on the issue. The recent negative results from the MIND Diet Trial, published on July 18th in the New England Journal of Medicine, seemingly demanded just such an extended discussion, and so that discussion will follow this introduction.
My contributions to the newsletter will focus on the subject that has always obsessed me: the extraordinary difficulty of generating reliable knowledge in any scientific endeavor and the ridiculous ease with which researchers (indeed, all of us) can be misled. If there’s one subject I have studied as much as or more than anyone alive, it is that of pathological science, a term coined in a 1953 lecture by the Nobel Laureate chemist Irving Langmuir to describe “the science of things that aren’t so.” Richard Feynman described much the same phenomenon as Cargo Cult Science in his famous 1974 commencement address at Caltech, and both should be required reading for anyone interested in science, whether as an observer or a professional. My first two books were on high-energy physicists (Nobel Dreams, 1987) and then chemists and physicists (Bad Science, 1993) who discovered nonexistent phenomena – supersymmetric particles, in the first case, cold fusion, in the second – and so demonstrated just how easy pathological science is to do. They were my learning experiences.
Since the early 1990s, I have focused on the manifestation of pathological science in the nexus of research on nutrition, obesity, chronic disease and public health. That’s where my expertise now lies (for those who think I have any) and where I will focus my writing, although I will not do so exclusively. Because the quality of the science in these disciplines is often so disappointing – a product, I believe, of both the training of the researchers and the expense and difficulty of rigorously testing hypotheses — I have played with the idea of calling my newsletter or my contribution to this one “Let’s Pretend This is a Science…” I may still. I hope also to write many of the entries in the form of letters to researchers and/or editors, asking questions that seem important in the light of the work being discussed. I’m hoping that they will at least occasionally see fit to thoughtfully respond, and so we can publish a dialogue rather than a monologue.
Because I consider myself, still, a journalist, and the coverage of these subjects by my colleagues in the media inevitably influences how we perceive them – both the nutrition-related subjects themselves and the scientific research used to draw conclusions and create consensus — I will also write often about that media coverage.
I will also apologize in advance for any errors. I will assuredly make them, if for no other reason than I am unaccustomed to writing and publishing quickly. (Regrettably, I make them even when I’m not.) I will endeavor to correct any as necessary.
The Importance of Clinical Trials
Such a trial is critically important. It can shed light not only on the role of nutrition in dementia and neurodegeneration, but also on the quality of nutrition science, in general – i.e., the greater scheme of things. To be precise, a randomized, controlled clinical trial (RCT) serves three functions:
1. It tests the safety and efficacy of the intervention being tested: are there meaningful benefits and do they outweigh the risks? Should we recommend or prescribe this intervention to patients with the reasonable expectation that it will do significantly more good than harm?
2. It tests the hypothesis underlying the intervention. If we are advocating for a dietary intervention based on a particular belief/hypothesis – in this case, we can call it the food-as-medicine hypothesis, as we’ll discuss — then an RCT of the diet tests that hypothesis, too, even if only indirectly.
3. It tests the research methodology and techniques/equipment that were used to generate the hypothesis. If we believe a particular hypothesis based on our interpretation of a certain type of evidence, then the experimental trial is also a test of the interpretation of that evidence and, perhaps, the validity of that evidence itself, and so the methods used to obtain it. Can we trust that evidence, in effect, to imply what we think it implies? And, if we can’t, why not?
The MIND Diet Trial
Now, the MIND diet trial: the researchers — from Rush Medical College in Chicago and from Harvard — randomized 604 subjects to receive intensive counseling to eat either a MIND diet with “mild caloric restriction” or a control diet, also with “mild caloric restriction.” The subjects were older adults, overweight if not obese, with a family history of dementia putting them at high risk for the disease. They were all folks who ate poorly to begin with – “a suboptimal diet,” in the researchers’ words. Hence, the deck could be stacked, legitimately, such that the subjects might be counted on to improve their diets as counseled and meaningful results might be obtained in a reasonable amount of time. In this case, three years.
The diet itself, as the researchers described it in the New England Journal of Medicine (NEJM), is composed primarily of “foods and nutrients that have been putatively associated with brain health.” The actual constituents of the diet can be found on the Harvard School of Public Health website: 3 servings daily of whole grains, plus green leafy vegetables, nuts, beans, berries, occasional poultry and fish, and olive oil. The diet explicitly restricts pastries and sweets; red meat, to fewer than four servings a week; cheese and fried foods (lumped weirdly, if not inexplicably, together), to less than one serving a week; and butter and/or stick margarine, to less than a tablespoon a day. (I assume the diet restricts nonstick margarine, as well, but….)
The Harvard page says nothing about sugar or added sugar, but the MIND diet is almost assuredly a low-sugar (sucrose/high-fructose corn syrup) diet. It allows a maximum of four servings weekly of “sweets and pastries.” We can assume that if it allowed sodas, energy drinks or even fruit juices, it would say as much. The Harvard site elsewhere counsels against drinking them. The sample meal plan includes not even a teaspoon of sugar to sweeten the breakfast of 1 cup of cooked steel-cut oats (with almond slivers, blueberries, and a “sprinkle of cinnamon”). The closest the meal plan gets to including sugar/sucrose is whatever’s found naturally in the blueberries and a single medium orange as a snack.
So we can understand why this diet would be so highly touted. The Harvard site summarizes what it calls the “research so far:”
The MIND diet contains foods rich in certain vitamins, carotenoids, and flavonoids that are believed to protect the brain by reducing oxidative stress and inflammation. Researchers found a 53% lower rate of Alzheimer’s disease for those with the highest MIND scores. Even those participants who had moderate MIND scores showed a 35% lower rate compared with those with the lowest MIND scores. The results didn’t change even after adjusting for factors associated with dementia including healthy lifestyle behaviors, cardiovascular-related conditions (e.g., high blood pressure, stroke, diabetes), depression, and obesity, supporting the conclusion that the MIND diet was associated with the preservation of cognitive function.
U.S. News and World Report (USNWR), the dominant media authority in the diet space, considers this evidence sufficiently reliable that it ranks the MIND diet as the fourth best diet overall. Its component parts, the Mediterranean and DASH diets, were ranked numbers one and two, respectively. (The Flexitarian diet, quite similar in its concepts, snuck in at number three.) USNWR recognizes that the MIND diet is only “potentially” capable of delaying neurodegeneration, but its place in the rankings seems otherwise secure.
The “research so far,” however, is almost exclusively observational. It comes from epidemiological surveys, not experiments. In these surveys, individuals eating in a MIND-like way (MINDfully?) tend to have a lower risk of Alzheimer’s disease and dementia. So eating MINDfully is associated with a lower risk of neurodegeneration. We don’t know if the diet is responsible for the lower risk – i.e., whether it is the cause. This is the issue discussed ubiquitously these days and yet somehow still not enough: associations do not imply causality.
The epidemiologists and the other researchers who do these observational studies will readily acknowledge, as they’ll often say or write, that associations do not prove causality, but they certainly believe that somehow these associations imply it, that they make a causal link between the two associated variables more likely than not. How can I be certain? If they didn’t believe associations implied causality, they would not do these studies. They certainly would not tell us how to eat based on the associations observed in these studies.
This speaks to the third function served by the randomized controlled trials: They test whether or not the associations that are observed in these epidemiologic studies imply what the epidemiologists think they do. In short, do these researchers understand their equipment (the epidemiological surveys themselves), and is this science capable of discerning what they think it can? Is this a functional science capable of establishing reliable knowledge, or a pathological one? This kind of observational evidence used to be referred to as hypothesis-generating evidence, because that’s the best it can do: suggest a causal hypothesis from evidence of association. The randomized controlled trials (experiments) are done to test the hypothesis, and so to test the quality and reliability of the evidence from which the hypothesis was generated.
Who Wants to Report the Bad News?
The news here, of course, is that the trial failed to confirm the hypothesis that eating MINDfully delays neurodegeneration. It did not refute it, an important point, because it’s impossible to prove a negative in any endeavor outside of mathematics. But it did not see what the researchers had hoped to see. Despite evidence suggesting that the subjects randomized to eat MINDfully made considerable effort to eat as counseled, they fared no better than the subjects in the control arm, neither on the numerous cognitive assessment tests nor on the brain imaging. Whatever this MIND diet might do, the trial generated no evidence that it delays or prevents neurodegeneration.
The results should have sparked widespread discussion and analysis. That’s one reason why a journal like the New England Journal of Medicine will publish them. So far, however, the news is in the absence of news. Newspapers like the New York Times, which have published multiple articles on the hypothesis-generating observational studies of the MIND diet – here, for instance, or here or here – chose not to cover the actual trial results, despite the NEJM publication. If Google can be trusted here, only CNN, the Globe and Mail, the Blue Mountain Eagle (Grant County, Oregon) and WKRC in Cincinnati covered the MIND results in the English-language general media. Medscape, Medical Xpress and Live Science also covered it.
So let’s now have that discussion. What should we make of the results? Despite the temptation to describe the absence of any benefit as “surprising,” as CNN and the Globe and Mail did, it clearly should not have been. The researchers themselves noted that previous reviews of the clinical trial evidence – here and here — had provided reason enough to expect the worst: these meta-analyses “did not affirm the beneficial effects of diet on cognition that were observed in epidemiologic trials.” Now this trial did not either.
Obvious question #1: why not?
This gets to the functions served by these RCTs. A likely answer is that this MIND intervention does not delay neurodegeneration. End of story. But maybe it does, and the trial simply failed to test that hypothesis properly. The history of science tells us that researchers and experiments often, perhaps more often than not, get the wrong answer. (Quoting Francis Crick of DNA fame: “Any theory that can account for all of the facts is wrong, because some of the facts are always wrong.”) Experimental science is difficult. That’s why independent replication of results is a requirement in a functional science. One experiment or even a few experiments getting the same answer can always be wrong. Maybe that’s what happened here.
Those researchers/nutritionists who have been proponents of MIND-like diets were quoted in the CNN story explaining the likely ways that the trial might have erred. Perhaps it didn’t run long enough, as Harvard’s Walter Willett, a pioneer of nutritional epidemiology, suggested. Maybe three years was simply insufficient to see meaningful results. The MIND researchers raised this possibility. It could be true, but the size of the associations observed in the epidemiological studies, if caused by the MIND diet – 50 percent reductions in Alzheimer’s disease incidence – suggests otherwise. It’s also possible that simply being in a clinical trial and receiving dietary counseling caused the control group to improve their diets considerably, too. The fact that both groups lost 10-11 pounds suggests they did. But then that raises the question of how the control group might have improved their diets, and that gets to the hypothesis underlying the MIND diet itself. This goes back to function number two served by such an RCT, as I describe above.
Perhaps both groups lost an equivalent amount of weight because of something they both did – a factor independent of the MIND diet prescription itself. Perhaps a third alternative would have been better still, or far better, if the researchers correctly understood what it is about how we eat that either accelerates or delays neurodegeneration.
For instance, and here I’m speculating but (I hope) on firm ground: what if the subjects on the control diet, counseled to engage in “mild caloric restriction,” did what subjects in these nutrition trials often do, and what most of us try to do when generally trying to “eat healthy”?
This is what ketogenic and healthy vegan or vegetarian diets or even healthy low-fat diets (per Stanford nutritionist Christopher Gardner’s famous DIETFITS trial) inevitably share. Even if the diet is ostensibly described as calorie-restricted, or low-fat, or restricts red and processed meat and animal products, as in a vegetarian or vegan diet, the individuals following these dietary patterns also improve the quality of the carbohydrates they consume. Quoting from my book, The Case for Keto: “They [the subjects of these diet trials] eat carbohydrates that are only minimally processed and that contain considerable fiber, such that both blood sugar [glucose] and insulin responses are muted. The technical term would be that they eat carbohydrates with a lower glycemic index…. They avoid eating sugar or drinking sugary beverages or carb-rich beverages like beer and milk. They avoid desserts after meals and snack bars between meals. They’re doing it to avoid fat [or calories], but they’re avoiding sugar and refined grains in the process.”
Testing the MIND diet hypothesis: Do foods serve as medicine to delay neurodegeneration?
This confounding of a dietary intervention with the natural inclination to restrict sugar and sweets when trying to eat “healthy” speaks to the underlying hypothesis of MINDful eating. As mentioned, this is the Food as Medicine hypothesis: The MIND diet assumes that particular foods are protective against neurodegeneration – they “protect the brain by reducing oxidative stress and inflammation,” as the Harvard website explains — because they associate in the epidemiological surveys with a lower risk of dementia. These include berries and nuts, green vegetables (kale!), and perhaps even whole grains and olive oil. All of them, the thinking goes, work like medicine to reduce oxidative stress and inflammation and so reduce our risk. The specific restriction of other foods (red meat, cheese, fried foods, sweets and pastries) implies that the researchers think these are harmful – chronic, but not acute, toxins, as Robert Lustig of U.C. San Francisco might say, perhaps causing oxidative stress and inflammation.
When the results of a trial are negative, as these were, we have to question all of these ideas about diet. Perhaps no foods work like medicine to prevent cognitive decline – we can’t add blueberries, whole grains, kale and fish to an otherwise unhealthy diet and expect that to help. Perhaps foods are either benign/harmless when it comes to chronic diseases, or they’re not. If they’re not, then they cause neurodegeneration, and those foods have to be avoided. This is another, alternative, hypothesis. But the question is: which foods?
Nutritionists and dietitians tend not to be fans of this kind of toxic-food hypothesis. They do not like labeling foods either “good” or “bad,” because they believe it fosters eating disorders, among other explanations that I find not entirely sensible. With the MIND diet trial failing to demonstrate any benefit to the brain based on its Food-as-Medicine thinking, we should consider the possibility that we need a different hypothesis. Perhaps carbohydrate-rich foods, and particularly liquids (sugary beverages, beer), or maybe alcohol or seed oils, are the neurodegenerative factors, and the diet that restricts these “bad” foods will have the greatest effect on neurodegeneration and dementia. A diet that explicitly restricts these foods might be protective because it would be absent the foods that accelerate or perhaps even cause neurodegeneration. Of course, that hypothesis, too, requires rigorous testing.
The Harvard website says that Johns Hopkins researchers are doing a trial comparing the MIND diet to a modified Atkins diet, which would serve as an initial test of this alternative hypothesis. A modified Atkins diet is absent virtually all carbohydrate-rich foods and beverages, absent all sucrose (except in berries) and high-fructose syrups, and the diet is high in fat and even saturated fat. It can be rich in animal foods. Comparing it to the MIND diet would be a way of testing the value of the whole grains and legumes (beans) in the MIND diet, and perhaps the value of restricting fat-rich foods like cheese and red meat. Unfortunately, clinicaltrials.gov reveals that the trial was canceled due to “lack of funding.”
Perhaps now, with the negative result from the current MIND trial, that funding will be found.
Testing the tenets of nutritional epidemiology: Are associations likely to imply causality?
Finally, what about the third function of RCTs: testing the interpretation of the evidence used to generate the hypothesis, in this case the associations from nutritional epidemiology and the likelihood that they imply causality?
Much of what we believe about nutrition emerges from these observational studies. Epidemiologists identify and survey large “cohorts” of individuals – nurses, for instance, in Harvard’s seminal Nurses’ Health Study. When the Harvard website refers to “participants” in these studies who had high or moderate MIND scores and manifested a lower risk of dementia, these were not the subjects of randomized controlled trials but the members of these observational cohorts. They were not counseled to change their diet or lifestyle, as they would be in a randomized, controlled trial, or provided with different diets to eat. They were not housed in facilities and fed all their meals. They were merely surveyed by researchers about the nature of their typical diet. They may or may not have reported on that diet accurately, and they may have changed their diet and eaten differently once recruited as participants in one of these cohort studies – another critical issue – but regardless, they are not subject to any experimental intervention.
These observational surveys establish the initial health of the cohort, what they eat (or admit or remember eating) and whatever aspects of their lives and lifestyles the researchers deem worthy of measuring. In some studies, the participants are surveyed repeatedly and given physical examinations repeatedly to better establish any associations reported between diet/lifestyle and health. Again, the implicit rationale for these studies is that the associations identified between diet/lifestyle and health are likely enough to be causal that they can be treated as such. This is the thinking/bias that has been embraced by the Harvard School of Public Health and U.S. News and World Report and a world of nutrition authorities to tell us how to eat. We should eat in the same way that the healthy people in these cohorts eat, based on the assumption – i.e., the hypothesis — that people who eat this way tend to be, well, healthier than those who don’t.
Those of us skeptical of this thinking are motivated largely by the abysmal track record of these nutritional hypotheses when they are tested in randomized, controlled trials. An optimistic assessment would be that they get the “right” answer, or at least the same answer, maybe half the time. A less optimistic assessment would compare their track record to a stopped clock. I discussed this problem back in a 2007 cover article in The New York Times Magazine, and the track record hasn’t gotten any better. The MIND trial is the latest example.
One way to think about the problems with observational epidemiology and, indeed, the challenge in all sciences – whether the search for new subatomic particles or the awareness that food X causes or protects against disease Y — is in terms of signal-to-noise ratios. The signal is the evidence that supports the conclusion that something new has been discovered. The noise is the background above which this signal has to stand out: it is all the evidence and all the reasons to believe that no discovery has been made.
The greater the signal and the smaller the noise or background, the more believable the discovery. In this sense, the scientific challenge is not just detecting and establishing the validity of the signal, but establishing unambiguously that the signal is too prominent to be explained by the background noise. This background isn’t just the random variation assessed in concepts like p-values and sigmas, but all the ways that the experiment or detection equipment (the cohorts, surveys, physical examinations and medical records in epidemiological research) may have conspired to fool the researcher. Epidemiologists refer to these as biases and confounders, and they represent all the ways that a signal, an association, might have been generated – even repeatedly generated, in numerous surveys – that have nothing to do with a causal link between the two associated variables.
Doing a proper background analysis may be the hardest, most time-consuming aspect of any research. One line I may have quoted in every book I’ve written is Richard Feynman’s, from his Cargo Cult Science commencement address, that “the first principle” of science “is you must not fool yourself — and you are the easiest person to fool.” The background analysis is a meticulous accounting of (ideally) all the ways you can possibly fool yourself, and all the ways that your apparatus, methodology and assays can conspire to fool you as well.
I don’t believe the epidemiologist exists who can think of all of the possible biases and confounders in his or her surveys and statistical methodologies, let alone reliably assess the impact of all of them. Borrowing from Donald Rumsfeld, these biases and confounders encompass both the known unknowns and the unknown unknowns. Hence, what’s needed is a signal so blindingly obvious that we can assume that whatever confounders the researchers failed to imagine or properly account for still could not explain it. The 10-20-fold increase in lung cancer risk in cigarette smokers vs. non-smokers is the iconic example of such an association, one that’s believably causal because we simply cannot imagine non-causal explanations.
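One way epidemiologists have tried to make this question concrete is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed association. The short sketch below is my illustration, not anything from the MIND papers; it simply applies the published E-value formula to the two kinds of associations just mentioned.

```python
from math import sqrt

def e_value(rr: float) -> float:
    """Minimum risk ratio an unmeasured confounder would need, with both
    the exposure and the outcome, to fully explain an observed risk ratio
    (VanderWeele & Ding, 2017). Protective ratios are inverted first."""
    if rr < 1:
        rr = 1 / rr
    return rr + sqrt(rr * (rr - 1))

# A 50 percent risk reduction (RR = 0.5), like the MIND associations:
print(round(e_value(0.5), 2))  # 3.41: a fairly modest confounder suffices
# A 15-fold increase, in the range of smoking and lung cancer:
print(round(e_value(15), 2))   # 29.49: the confounder would have to be enormous
```

A confounder of strength roughly 3.4 is arguably within reach of healthy-user effects and socioeconomic status; a confounder of strength roughly 29 is hard even to imagine, which is one way of stating why the smoking association is believable on its face and a 50 percent dietary association is not.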
Now the critical question: is the 50 percent reduction in risk of dementia or Alzheimer’s disease associated with eating very MINDfully that kind of blindingly obvious signal? Can it be explained without having to assume that the MIND diet itself protects against neurodegeneration?
The answer is yes. In this kind of setting, a 50 percent reduction in risk is what you’d expect from a simple alternative hypothesis: that people who eat very MINDfully are very health conscious (they don’t eat this way by accident or coincidence), that they are of a higher socioeconomic status (they can afford to make a priority of eating this way and of being health conscious), and that they enjoy all the health-related perks that go with higher socioeconomic status, including higher education and better doctors. I discussed this problem, encompassed in the concept known as healthy-user bias, at length in my 2007 New York Times Magazine cover article on the problems with observational epidemiology. I won’t go into it further here; the more of these articles I write, the more I’ll assuredly return to it. The simple fact is that the epidemiologists will say, as the Harvard website does, that they “control” for these factors, for “healthy lifestyle behaviors,” but they have made precious little effort to convince skeptics that they even can do such a thing, let alone that they have. In a field like physics, which I spent my journalistic salad days covering, researchers consider no endeavor more important than understanding how their apparatus and their methodology can fool them, and convincing others that their understanding of these factors, and so their confidence in their results, is justified. Those researchers who are not obsessed with understanding these confounding factors – the background to their signal — inevitably discover non-existent phenomena and live to regret it. This is at the heart of all pathological science.
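To see how healthy-user bias can manufacture an association of roughly this size, here is a toy simulation, with entirely made-up numbers of my own choosing, in which diet has no causal effect on dementia at all:

```python
import random

random.seed(0)

def simulate_cohort(n=200_000):
    """Toy cohort in which diet has NO causal effect on dementia.
    Both 'eating MINDfully' and dementia risk are driven by one
    hidden trait, health consciousness. All numbers are invented."""
    counts = {(m, d): 0 for m in (True, False) for d in (True, False)}
    for _ in range(n):
        health_conscious = random.random() < 0.5
        # Health-conscious people are far more likely to eat MINDfully...
        mindful = random.random() < (0.7 if health_conscious else 0.2)
        # ...and, for reasons unrelated to diet (better doctors, more
        # exercise, higher socioeconomic status), less likely to get dementia.
        dementia = random.random() < (0.05 if health_conscious else 0.15)
        counts[(mindful, dementia)] += 1
    risk_mindful = counts[(True, True)] / (counts[(True, True)] + counts[(True, False)])
    risk_other = counts[(False, True)] / (counts[(False, True)] + counts[(False, False)])
    return risk_mindful / risk_other  # observed risk ratio, causally meaningless

print(round(simulate_cohort(), 2))  # well below 1.0 despite no causal effect
```

With these invented parameters the apparent risk ratio comes out near 0.6, an apparent 40 percent “reduction,” with zero causal contribution from diet. Real cohorts are vastly messier, but the mechanism is the same.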
The recent MIND results are yet more evidence that the epidemiologic or observational basis of our nutritional belief system is a house of cards. If the epidemiologists were serious about their science, they would recognize the negative results of the MIND trial as a referendum on the basic reliability of their science, and establish a plan to rigorously and objectively examine the value of their product. Who knows? Maybe I’m wrong about all this. But I’ve never seen epidemiologists produce the kind of concerted, rigorous analysis that would convince me or any other critic otherwise. As a rule, these very influential researchers seem to believe that they are sufficiently unlikely to be fooling themselves that they don’t have to worry about it. They believe that the subject of their research is so critically important, and the experimental tests so fraught with risk and expense, that we should believe their hypotheses, even when they fail the experimental tests, because this is still the best they can do.
The history of science, not to mention the latest MIND diet trial, suggests otherwise. Nutrition science has a problem, and it’s not only that the conventional thinking in the field so frequently fails experimental tests but also that the researchers retain their faith in the methodology that generates this thinking.
A brief note about reporting bias, by the media, not (necessarily) the researchers
Before we go, I want to talk about the problem of reporting bias. This, too, will come up often as I continue to write on this subject. I can understand why newspaper editors and reporters would prefer not to cover negative results, as in the MIND trial, or cover them as CNN and the Globe and Mail did, by reporting what the study found and then explaining all the reasons (or quoting “experts” explaining all the reasons) why the trial/experiment might have gotten the wrong answer and why the MIND diet should be followed anyway. To report the results of the latest trial without bias is to suggest to readers that they cannot trust what nutritionists and dietitians and even the reporters and their media outlets have been saying in the past. In a world in which two thirds of the (American) public live with overweight and/or obesity, in which roughly one in ten lives with diabetes, do we really want to undermine the faith and credibility of the authorities and the media on such an important subject? Surely, the odds are good enough that something like the MIND diet is far healthier than how Americans typically eat, and don’t we want to encourage at least healthier eating?
Yes, but…. By not reporting on the negative results when they’re published, reporters in the future and anyone who covers these subjects are likely to be unaware of them. The next time the MIND diet appears in the journals – most likely from yet another observational study — the New York Times or Washington Post reporter who searches the archives for what’s been reported in the past will only find the studies in which the MIND diet lived up to expectations, not those in which it didn’t. That reporter will get a biased view and that biased view will play into that reporter’s article as well.
I saw and discussed this problem in nutrition in the very first article I published: a lengthy (and award-winning) exposé in the journal Science on the research purportedly linking salt consumption to hypertension. I quoted one of the leading authorities in the field discussing “the totality of the data,” which was the body of evidence that appeared to “support a particular conviction definitively, unless one is aware of the other totality of data that doesn’t.” In my first book on nutrition research, Good Calories, Bad Calories (GCBC), I discussed how these nutrition researchers and reporters had developed the habit of rejecting any negative evidence as it was produced, always finding a reason – as with the MIND trial – to render it meaningless.
As I said in GCBC, you can seemingly prove anything if you insist on a do-over every time your experiment finds the opposite. Nutrition researchers have been conditioned to engage in this kind of selection bias for decades, if not longer, and the reporters who cover the field do the same.
A key to understanding reality, though, is in paying more attention to the evidence that doesn’t agree with your preconceptions – i.e., the background noise – than to the evidence that does. When journalists don’t report these results, the bias inevitably works to keep the idea alive and to inhibit the necessary discussion of what else might be happening – what the true link, if any, between diet and dementia might be – and why the epidemiologic surveys don’t pick it up. The more important the issue – climate change and Covid are the two most prominent examples – the more the bias in reporting seems to manifest itself.
The failure of the MIND diet to prevent or delay neurodegeneration in this latest trial tells us something extremely valuable about the background to the conventional wisdom in nutrition, obesity and chronic disease. It speaks directly to the Food as Medicine hypothesis. Let’s see if it provokes this kind of discussion in the weeks to come in the medical journals.
If nothing else, proponents of the MIND diet should seriously consider a change of acronym. How about something that plays up the hypothetical, allowing them to back away from their presumption of neurodegenerative delay more easily should the diet fail future tests as well? They could just call it the M-D (Mediterranean-DASH) diet, and leave behind the implication of beneficial effects on neurodegeneration. I’d bet against changes being made, though, as MIND is simply too catchy and already ubiquitous. I’m hoping I’m wrong.