By Myrto Angela Ashe, MD, MPH

It has always been true, as Jonathan Swift asserted a few centuries ago, that “falsehood flies, and truth comes limping after it.”

So how do we know what we know?

Internet algorithms only worsen the problem of flying falsehoods.

How much due diligence should we do before mentioning a finding, quoting a study, forming an impression about a topic, coming to a conclusion about it, or recommending a treatment?

The basic problem is that our experience of the world is skewed by our cognitive biases. We wish we could assess information in a timely manner, so as to make good decisions when faced with a crisis, but this is very difficult. Below, I share my framework for assessing information and argue that all of us with scientific knowledge and expertise share a responsibility to avoid spreading misinformation.

Cognitive Bias

We know for a fact that humans take in information selectively, in ways that confirm their previous biases. A recent Hidden Brain podcast went into detail explaining findings from research on memory, showing that our previous beliefs and experiences drastically change how we remember things.

In a recent book, Why We Are Polarized, author Ezra Klein explains that not only do we have a difficult time adopting new points of view, but also that people with more “intelligence” (determined by their responses to math questions, in one specific study) spend more time and energy justifying previously held beliefs than people with less “intelligence.”

This is well-understood and termed “cognitive bias.” There are multiple forms of cognitive bias. Each person’s “reality” is far from complete. There’s a “cognitive bias codex” and it looks like this:


While this graphic is breathtaking, it is also intuitively recognizable as correct.

So, I believe we can all agree that we are at the mercy of our cognitive biases, and that they skew our beliefs and our ability to come to agreements, even our understanding of what is real and what isn’t. How should we proceed when we want to come to a conclusion?

What to base our advice on

As a medical doctor, I am frequently asked for advice. I have to base my advice on something, and I have many choices on how I can proceed.

I could just memorize and repeat the guidance from various official organizations. Over time, however, this guidance might change, and I would like to get ahead of the lag between what is available in research studies and what has been adopted by official organizations.

Take, for example, the case of lead. There was a time we were advised that any blood lead level below 25 mcg/dL was safe. Then the threshold was brought down to 10 mcg/dL. Then came a statement that there is no safe lead level for kids. In 2019, a study showed that the risk of cardiovascular disease for adults rises linearly with rising lead level, starting from zero. So lead is not safe at any level, for anyone, and any reassurance issued 20 or 30 years ago was wrong. Most upsetting, there is a long lag time (some say 14 years, some say 30) between the availability of research data pointing to a certain conclusion and the adoption of that conclusion by official organizations.


As we are finding out in the COVID-19 pandemic, it is maddeningly difficult to give excellent advice in real time. SARS-CoV-2 is a problematic virus that causes mild disease in a large percentage of people (albeit with poorly understood sequelae), significant disease in a significant percentage, and severe disease, disability, and death in a small percentage. However, this is a small percentage of an extremely large number, so that we have lost hundreds of thousands of Americans in less than two years.

The virus spreads fast but the damage spreads slowly: each person with COVID infects a few others, most of whom will be asymptomatic or mildly affected. At this point, it seems to many observers that there is little to worry about.

Then it spreads further, and by the time anyone notices that hospitals are getting busy, hundreds and soon thousands of people are dying every day.
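The dynamic described above is simple exponential growth. A toy calculation makes it concrete; the reproduction number and fatality rate below are illustrative assumptions of my own, not epidemiological estimates:

```python
# Toy model of generational spread. Both parameters are
# illustrative assumptions, not epidemiological estimates.
r = 3        # assume each case infects 3 others
ifr = 0.005  # assume 0.5% of infections are fatal

cases = 1
for generation in range(10):
    cases *= r  # each generation multiplies the case count by r

print(cases)               # 59049 cases after 10 generations
print(round(cases * ifr))  # 295 expected deaths from a "mild" disease
```

The point of the sketch is the mismatch between perception and outcome: any one infected person almost certainly recovers, yet ten quiet generations of spread turn a single case into tens of thousands, and a small fatality fraction of that large number is hundreds of deaths.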

Vaccines were found to be effective in placebo-controlled, randomized clinical trials, but the study subjects were carefully chosen to be fairly healthy, which diminished the public’s confidence. The follow-up time was short, raising concerns about whether the benefits of the vaccines would persist over time. All this is justifiable in the setting of an emergency, but how do we move forward with our advice?

Because of its heterogeneous presentation, COVID-19 looks mild enough to most people (except hospital personnel). If you treat people with a medication, the first 100 or 200 patients will do really well. They may have done well even if you hadn’t treated them, but you will need research to find out, and in the meantime, what should you do?

The whole thing gets hopelessly politicized because in under-resourced countries, the population needs a sign of hope, and a sign of confidence in their government, or they may revolt as deaths mount. The strife in itself may directly or indirectly kill more people than COVID-19. This could motivate certain researchers to invent or falsify research information, or design and/or report research in a biased manner.

Also, because COVID-19 is a disease, and disease evokes fear of losing control, even in well-resourced countries you will get a substantial proportion of people who are inclined to keep believing that their natural state of health, or their religion, will protect them better than a treatment or medication over which they have little control, coming from a system they distrust.

My framework

I decided to write this blog because of the daily onslaught of “evidence” that we should support one treatment or intervention over another. These mostly come from well-meaning people and it is important that the data be available, but since we all have cognitive biases, what standards should we hold ourselves to before linking, re-tweeting, re-posting, forming conclusions, and giving advice to other people?

As a medical doctor, I feel deeply responsible to give correct advice, but burdened with my cognitive biases, how can I do that?

In general, I have adopted 4 basic guidelines:

  1. I will look at the results of randomized controlled trials. By this I mean double-blind, placebo-controlled studies, in which a group of people (whose medical symptoms are relevant to what I am hoping to treat) are allocated randomly to take either the treatment under consideration or a placebo, and neither the experimenters nor the study subjects know who got which (double-blind).
  2. I will consider the basic science, and what it tells me about what is biologically plausible. As Carl Sagan said, “it pays to keep an open mind, but not so open that your brains fall out.” This is easier said than done. As we learn more about basic science, we may revise what is biologically plausible.
  3. I will consider ancestral relevance. This is specifically when it comes to considerations of health vs. ill health. I believe that our genetic setup is almost identical to that of our ancestors, and that therefore, interventions that were available to them (fasting, sleep, exercise, community, unprocessed foods) are likely to be of highest benefit to us. That is in contrast to interventions that were not available to our ancestors, such as medications and other “high-tech” interventions.
  4. I will consider the experience of my peers, and that of my patients, but mostly concerning very low risk interventions. Such experience is likely to be confounded by the placebo effect. If I guess something works, and it works for my patient, then I will be further convinced, even though the chance of a placebo response was at least 30%. A placebo response is not a weak or fleeting response, but a manifestation of the body healing itself using the nervous system and immune system.

I admit that the above pose dilemmas. In particular, randomized controlled trials have not necessarily been done for what I want information on. Second, many are poorly conducted in ways that invalidate their results. Some are even made up from scratch, listing data that is completely invented. This can be detected if you know what to look for, but who has the time, equipment, expertise, or inclination?

How can you suspect a study is fraudulent? For example, if the study is about people, the percentages in the tables of the Results section of the preprint or publication need to make sense. Say it is a study with 20 people in it, and the authors confidently state that 33% were male. That’s impossible: the percentage has to be 30% or 35%, since either 6 subjects were male or 7 were.
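This arithmetic check can be automated. Here is a minimal sketch, in the spirit of the published “GRIM” consistency test, of asking whether a reported percentage could correspond to a whole number of subjects; the function name and rounding tolerance are my own choices:

```python
def percentage_is_possible(reported_pct, n, tolerance=0.5):
    """Return True if reported_pct could arise from a whole-number
    count out of n subjects, allowing for rounding to the nearest
    whole percent (tolerance of 0.5)."""
    for count in range(n + 1):
        actual = 100 * count / n
        if abs(actual - reported_pct) <= tolerance:
            return True
    return False

# The example from the text: with 20 subjects, every true percentage
# is a multiple of 5, so a reported 33% cannot be right.
print(percentage_is_possible(33, 20))  # False
print(percentage_is_possible(35, 20))  # True
```

A check like this only flags impossible numbers; percentages that pass it can still be fabricated, which is why the forensic specialists mentioned below look at many other features of a paper.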

There are people who specialize in this sort of forensic analysis. But the problems can also lie in the study design, or in the way the study was carried out after the patients were initially randomized to treatment vs. placebo. The treatment might have been so noticeable (causing nausea or headache, for example) that patients knew what they had been randomized to, and had a positive response. Or the study pits the treatment against a “placebo” that actually has a positive effect of its own, making the treatment look useless. You need to know a lot about a topic to interpret the research on that topic.


Hopefully we can all agree that coming to a conclusion about a course of treatment is fraught, time-consuming, and should not be done lightly. I invite all of us to consider the consequences of our statements, and to work hard to avoid being misled, or to find a resource we trust for its efforts to be accurate, before sending links and re-posting information. Once someone points out that your resource doesn’t make a serious effort to be accurate, please revise your sources of information.

In our social media environment, algorithms favor anything that has been shared multiple times, and it becomes its own sort of “truth.” Given the unequal distribution of education in our country, and the world, those of us with knowledge and ability to interpret data should make the effort to avoid skewing reality. We need to have methods to guard ourselves against cognitive bias, to have a framework by which we can proceed, to know and actively look for what would convince us that dearly held beliefs are incorrect, and to keep an open mind.

The entire quote about falsehood and truth, by Jonathan Swift, is poignant and quite relevant to our concerns:

“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: like a man, who hath thought of a good repartee when the discourse is changed, or the company parted; or like a physician, who hath found out an infallible medicine, after the patient is dead.”
