PubPeer’s rarefied community of scientific detectives has produced an unlikely celebrity: Elisabeth Bik, who uses her uncanny acuity to spot image duplications that would be invisible to practically any other observer. Such duplications can allow scientists to conjure results out of thin air by Frankensteining parts of many images together or to claim that one image represents two separate experiments that produced similar results. But even Bik’s preternatural eye has limitations: It’s possible to fake experiments without actually using the same image twice. “If there’s a little overlap between the two photos, I can nail you,” she says. “But if you move the sample a little farther, there’s no overlap for me to find.”

When the world’s most visible expert can’t always identify fraud, combating it—or even studying it—might seem impossible. Nevertheless, good scientific practices can effectively reduce the impact of fraud—that is, outright fakery—on science, whether or not it is ever discovered. Fraud “cannot be excluded from science, just like we cannot exclude murder in our society,” says Marcel van Assen, a principal investigator in the Meta-Research Center at the Tilburg School of Social and Behavioral Sciences. But as researchers and advocates continue to push science to be more open and impartial, he says, fraud “will be less prevalent in the future.”

Alongside sleuths like Bik, “metascientists” like van Assen are the world’s fraud experts. These researchers systematically track the scientific literature in an effort to ensure it is as accurate and robust as possible. Metascience has existed in its current incarnation since 2005, when John Ioannidis—a once-lauded Stanford University professor who has recently fallen into disrepute for his views on the Covid-19 pandemic, such as a fierce opposition to lockdowns—published a paper with the provocative title “Why Most Published Research Findings Are False.” Small sample sizes and bias, Ioannidis argued, mean that incorrect conclusions often end up in the literature, and those errors are too rarely discovered, because scientists would much rather further their own research agendas than try to replicate the work of colleagues.

Since that paper, metascientists have honed their techniques for studying bias, a term that covers everything from so-called “questionable research practices”—failing to publish negative results or applying statistical tests over and over again until you find something interesting, for example—to outright data fabrication or falsification. They take the pulse of this bias by looking not at individual studies but at overall patterns in the literature. When smaller studies on a particular topic tend to show more dramatic results than larger studies, for example, that can be an indicator of bias. Smaller studies are more variable, so some of them will end up being dramatic by chance—and in a world where dramatic results are favored, those studies will get published more often.

Other approaches involve looking at p-values, numbers that indicate whether a given result is statistically significant or not. If, across the literature on a given research question, too many p-values seem significant, and too few are not, then scientists may be using questionable approaches to try to make their results seem more meaningful. But those patterns don’t indicate how much of that bias is attributable to fraud rather than dishonest data analysis or innocent errors.
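Those two signatures, inflated effects in small studies and an overabundance of significant p-values, are easier to see in a toy example. The short Python sketch below is purely illustrative: the true effect, sample sizes, and publication rule are all invented, and it is not the code any metascientist actually runs. It simulates a literature in which significant results are more likely to be published, then reports both patterns.

```python
# A toy simulation, not any research group's actual method: it generates a
# biased literature and then checks for the two patterns described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.15            # assumed (small) real effect, in standard deviations
published = []

for _ in range(2000):
    n = rng.integers(10, 200)                  # per-group sample size
    treatment = rng.normal(true_effect, 1, n)
    control = rng.normal(0.0, 1, n)
    result = stats.ttest_ind(treatment, control)
    observed = treatment.mean() - control.mean()
    # Publication bias: significant findings are always written up,
    # null findings only one time in five.
    if result.pvalue < 0.05 or rng.random() < 0.2:
        published.append((n, observed, result.pvalue))

sizes, effects, pvals = map(np.array, zip(*published))
print(f"mean published effect, small studies (n < 50):   {effects[sizes < 50].mean():.2f}")
print(f"mean published effect, large studies (n >= 100):  {effects[sizes >= 100].mean():.2f}")
print(f"share of published results that are significant: {(pvals < 0.05).mean():.0%}")
```

In a biased literature like this simulated one, the published small studies come out far more dramatic, on average, than the published large ones, which is exactly the kind of asymmetry metascientists look for.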
There’s a sense in which fraud is intrinsically unmeasurable, says Jennifer Byrne, a professor of molecular oncology at the University of Sydney who has worked to identify potentially fraudulent papers in cancer literature. “Fraud is about intent. It’s a psychological state of mind,” she says. “How do you infer a state of mind and intent from a published paper?”

Because retractions are such an indirect measure of fraud, some researchers go straight to the source and poll scientists. Based on several published surveys, Fanelli has estimated that about 2 percent of scientists admit to having committed fraud at some point in their careers. But in a more recent anonymous survey of scientists in the Netherlands, 8 percent of respondents admitted to committing at least some fraud in the past three years. Even that figure may be low: Perhaps some people didn’t want to admit to scientific misdeeds, even in the safety of an anonymous survey.

But the results aren’t as dire as they might seem. Just because someone has committed fraud once doesn’t mean they always do so. In fact, scientists who admit to questionable research practices report that they engage in them in only a small minority of their research. And because the definition of fraud can be so unclear, some of the researchers who said they committed fraud might have been following common practices—like removing outliers according to accepted metrics.

In the face of this frustrating ambiguity, in 2016 Bik decided to try to figure out the extent of the fraud problem by being as systematic as possible. She and her colleagues combed through a corpus of more than 20,000 papers looking for image duplications. They identified problems in about 4 percent of them. In more than half of those cases, they determined that fraud was likely. But those results only account for image duplication; if Bik had looked for numerical data irregularities, the number of problematic papers she caught would probably have been higher.

The rate of fraud, though, is less consequential than how much of an effect it has on science—and there, experts can’t agree either. Fanelli, who used to focus much of his research on fraud but now spends most of his time on other metascientific questions, thinks there’s not much to worry about. In one study, he found that retracted papers made only a small difference to the conclusions of meta-analyses, studies that try to ascertain the scientific consensus about a particular topic by analyzing large numbers of articles. As long as there’s a substantial body of work on a particular subject, a single paper typically won’t shift that scientific consensus much (the sketch below works through a toy version of that arithmetic).

Others, though, are more worried—Byrne is particularly concerned about paper mills, organizations that generate fake papers en masse and then sell authorships to scientists looking for a career boost. In some small subdisciplines, she says, fraudulent papers outnumber genuine ones. “People will lose faith in the whole process if they know that there’s a lot of potentially fabricated research, and they also know that no one’s doing anything about it,” she says.

As hard as she and her PubPeer compatriots try, Bik is never going to be able to rid the world of scientific fraud. But, to keep science working, she doesn’t necessarily need to. After all, there are countless papers that are totally honest and also totally incorrect: Sometimes researchers make errors, and sometimes what looks like a genuine pattern is just random noise.
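The toy calculation promised above makes Fanelli’s point concrete. It is a deliberately simple fixed-effect meta-analysis with invented numbers, not his actual analysis: it pools thirty honest studies, adds one fabricated study claiming a dramatically larger effect, and shows how little the pooled estimate moves.

```python
# Toy illustration with invented numbers, not Fanelli's analysis: one
# fabricated study barely shifts an inverse-variance-weighted pooled effect
# once a topic already has a substantial honest literature.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.30

# Thirty honest studies: each reports an effect and a standard error
# that shrinks as its sample size grows.
sample_sizes = rng.integers(30, 300, size=30)
std_errors = 1 / np.sqrt(sample_sizes)
effects = rng.normal(true_effect, std_errors)

def pooled_estimate(effects, std_errors):
    """Fixed-effect meta-analysis: inverse-variance-weighted mean effect."""
    weights = 1 / std_errors**2
    return float((weights * effects).sum() / weights.sum())

honest = pooled_estimate(effects, std_errors)

# One fraudulent study claiming a fivefold larger effect, with decent precision.
with_fraud = pooled_estimate(np.append(effects, 1.5),
                             np.append(std_errors, 1 / np.sqrt(50)))

print(f"pooled effect, honest literature:         {honest:.2f}")
print(f"pooled effect, plus one fabricated study: {with_fraud:.2f}")
```

With thirty real studies already in the pool, the single faked result nudges the estimate by only a few hundredths; in a sparse subdiscipline of the kind Byrne worries about, the same fake paper would carry far more weight.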
Honest mistakes and random noise are part of why replication—redoing a study as accurately as possible to see if you get the same results—is such an essential part of science. Conducting replication studies can mitigate the effects of fraud, even if that fraud is never explicitly identified. “It’s not foolproof or super efficient,” says Adam Marcus, who, together with Ivan Oransky, founded Retraction Watch. But, he continues, “it’s the most effective mechanism we have.”

There are ways to make replication an even more effective tool, Marcus says: Universities could stop rewarding scientists only for publishing lots of high-profile papers and start rewarding them for conducting replication studies. Journals could respond more quickly when evidence indicates the possibility of fraud. And requiring scientists to share their raw data or accepting papers on the basis of their methods rather than their results would make fraud more difficult and less rewarding. As those practices get more popular, Marcus says, science gets more resilient.

“Science is supposed to be self-correcting,” Marcus says. “And we’re watching it correct itself in real time.”