Imagine a bowl of crushed cherries. Suppose that we could still somehow pick a few whole, healthy cherries out of that bowl. We then show these fruits to our friends, who happily conclude how good our harvest was and how beautiful our trees must be, a fitting reward for our gardening ambitions.
This situation, which at first glance seems simple, presents several problems. On the one hand, the sample of cherries on offer may lead our unwary friends to the risky conclusion that the rest of this year's crop is of equally high quality. They may assume that these specimens adequately represent the crop as a whole: perhaps not every cherry is this excellent, but most probably are. On the other hand, if they are encountering cherries for the first time in their lives, they may draw conclusions about the nature of the fruit in general, which is perhaps an even more serious error than the first. If they meet only excellent cherries and learn to identify those as cherries, it may even happen that they fail to recognize other, less fortunate specimens as cherries at all.
The state of healthy and less healthy cherries is also of serious scientific interest. So-called cherry-picking (rendered in Hungarian as "mazsolázás", literally raisin-picking, a term I will use here with a small shift in meaning) is a common error in logic, reasoning, and theorizing. The basic problem in such cases is that someone focuses on, and draws attention to, only those cases from the body of evidence, theories, or data (hereinafter: evidence) that support their theory, and ignores the evidence that refutes it. Suppose, for example, that 99% of scientists agree on a given question, while 1% disagree or disagree entirely. If we read an article about this case claiming that "many scientists question the accuracy of the theory", but saying nothing about what this means in absolute numbers, or about how many scientists accept the theory, we can come away with the impression that scientists in general reject it. Cherry-picking, then, is above all a tool for constructing impressions.
In addition, cherry-picking can make a theory seem much more powerful than it actually is, or lead an incorrect theory to be accepted as true. This is because the scientist in question ignores all other evidence while highlighting the supporting evidence, giving readers the false impression that the research findings are likely accurate. The strategy can take many forms: selective presentation of data, opaque presentation of (or silence about) methodology, highlighting or overemphasizing supporting case studies, overstating the importance of arguments that favor the research, or listing seemingly supportive citations out of context. The phenomenon is thus too varied to be reduced to a single strategy, which also makes correcting the error a complex task.
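To make the distortion concrete, here is a minimal sketch, my own illustration rather than anything from the article, of what cherry-picking does to an estimate. All names and numbers are assumptions chosen for the example: we draw noisy measurements of an effect whose true size is zero, then "report" only the observations that support a positive effect.

```python
import random
import statistics

random.seed(42)

# True effect size is zero: the measurements are pure noise.
measurements = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Honest report: use all the evidence.
honest_mean = statistics.mean(measurements)

# Cherry-picked report: keep only observations that "support" a positive effect.
supporting = [m for m in measurements if m > 0]
picked_mean = statistics.mean(supporting)

print(f"Mean of all {len(measurements)} measurements: {honest_mean:+.3f}")
print(f"Mean of the {len(supporting)} 'supporting' ones: {picked_mean:+.3f}")
# Typical output: the full sample hovers near 0, while the cherry-picked
# subset suggests a sizeable positive effect of roughly +0.8.
```

Nothing in the data changed between the two reports; only the selection rule did, which is exactly the point.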
Motives, transparency and decisions
The situation is further complicated if cherry-picking is also examined in terms of the motivations behind the selection, as these can reveal much about the decisions of individual researchers and about the workings of science in general. However, drawing the boundary between ignorance and rational choice, that is, defining what counts as relevant refuting evidence essential to the internal coherence of the research, and what is already irrelevant to the case at hand, leads into a quagmire. An ideal that philosophers and scientists invoke under many names calls for a certain degree of transparency in science, based above all on the open communication of results, data, scientific methods, and potential value judgments. Realizing this ideal perfectly, however, requires solving many theoretical problems. Kevin Elliott, an American philosopher of science, argues in one of his works that even the aforementioned transparency can be a non-neutral, value-laden method. According to Elliott, deciding which areas of their research scientists make transparent, which remain hidden, and in what manner, can itself run counter to the ideal of value-free science. These decisions result from selection processes much like those in cherry-picking, and they often reveal implicit or explicit values. In the most unpleasant cases, it can even happen that the very strategies that seem to serve the required transparency undermine its potentially positive results.
In his case study, Elliott examines the controversy surrounding Lyme disease. The starting problem is that some individuals with Lyme disease experience lingering symptoms after antibiotic treatment. The debate is organized, on the one hand, around whether prolonged Lyme disease exists at all, and on the other, around whether long-term antibiotic treatment is needed and beneficial for treating it. In Lyme disease research, however, what individual researchers consider appropriate evidence, the assumptions they rely on, the concepts of transparency they work with, and whether they accept or deny the existence of prolonged Lyme disease all influence the research outcomes, and in practice the treatment of patients as well.
Of course, if a researcher believed there was no such thing as prolonged Lyme disease, they would not take reports of it seriously, nor, say, present them in publications as evidence against their position. Along these lines, it can be seen that the selection of information can itself be interpreted as a series of non-neutral philosophical decisions. In other words, drawing the boundaries of science is also tied to what we consider to be part of a particular piece of research. It is troubling, for instance, if a physician examines and considers only those patient reports that do not support the existence of prolonged Lyme disease, making long-term antibiotic treatment appear unnecessary: this yields seemingly clear confirmation from precisely the cases that speak against prolonged Lyme disease. It may be equally troubling, however, if researchers ignore patients' personal reports altogether, since such reports are generally regarded as an unreliable source.
The ethical responsibility of the researcher
Cherry-picking therefore has both practical and philosophical aspects, framed in turn by the question of the researcher's ethical responsibility. Probably everyone would agree that if researcher X consciously sifts the evidence and presents only what supports their position, then X is responsible for publishing misleading results, which is detrimental to science. The motives can vary: publication pressure; the hope that the refuting examples are not so important, because someone else will deal with them anyway; or silence used simply as a rhetorical device: if we don't talk about it, perhaps others won't notice it either. In the worst case, political or ideological motives can color the picture, or the goal of undermining a consensus belief. Emphasizing scant or seemingly contradictory evidence, or manipulating data, may convince uninformed people of the validity of an untenable position. A case in point is the cherry-picking strategy of global warming deniers: how can the Earth be warming if it is still so cold in April?!
On the other hand, if a researcher is not consciously cherry-picking among the results, but simply was not familiar with the works that refuted their theory, the situation is morally more complex. What exactly happens in such cases? If the researcher does not work carefully enough, does not consciously search for refuting examples, and happens not to stumble upon any (that is, they were unlucky), then it is not clear whether they are manipulating or simply doing insufficiently thorough work.
If the researcher is not consciously cherry-picking, the phenomenon can also be regarded as a basic psychological trait of human beings. When I work feverishly on a topic, it so happens that things related to it appear in every corner of the world: related books, articles, conferences, examples, arguments, analogies, and so on. If I am interested in the role of objects in the formation of knowledge, I may start noticing everywhere the varied ways in which the use of designed objects shapes human behavior. If I were convinced that there is no such thing as free will, I would probably see cases everywhere that seem to support this. The difficulty is that in the flow of research it is sometimes hard to tell what is genuinely relevant and what is merely the product of a feverish work ethic. Moreover, logically speaking, it cannot be ruled out that what I consider truly relevant simply is not relevant to that degree, or that I fail to register the less relevant material in the first place, allowing me to develop the false image that my subject sits at the center of the world. For refuting examples the situation is reversed: my attention is presumably drawn less to analyzing works that seem irrelevant to me. So avoiding cherry-picking appears to be no easy task, in theory or in practice. A further reason is that we are very likely unaware of the data we have screened out. Such ignorance can only come to light if we knew about the material before, or if targeted research turns up counterexamples whose importance we can demonstrate. It is hard to account for what we are not aware of.
Is it cherry-picking if only successful results are reported?
In addition, cherry-picking, whether the selection is intentional or unintentional, can be linked to other value-driven selection principles, and implicitly to drawing the boundaries of research or of a specific discipline. Analyzing this phenomenon can thus also lead us to examine value-laden approaches in the natural sciences.
Beyond the above cases, as the problem is placed in the broader perspective of scientific knowledge production, the doubts only multiply. Is it inevitable, for example, that the wider community, and even the circle of experts, encounters only a small slice of scientific work, and that the whole picture is never shown? In other words, is science communication itself cherry-picking when it speaks mainly about successful and exciting results rather than about difficulties, internal debates, or failed experiments? Looking at publishing practices, it is generally true that in high-profile journals we read mostly about successful experiments, and far less often about failed hypotheses. In the long run, this may contribute to a biased picture of the real performance of scientific knowledge production. The truth is that even science, which we consider our most reliable way of producing knowledge, is not always immediately successful. Science is full of failed experiments, false assumptions, and errors introduced by technical devices, which in many cases may even contribute to a solution in the long run. In other words, it is conceivable that it is precisely a series of errors that leads to an understanding of some phenomenon. So the question remains: what do we do with the half-rotten cherries of science?
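The publication-bias mechanism this paragraph describes can also be sketched in a few lines. The following is again my own illustration with assumed numbers, not a claim about any real journal: many small studies measure a modest true effect, but only the studies that clear a crude significance threshold get "published", so the published literature overstates the effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # modest real effect (assumed for the example)
N_PER_STUDY = 30    # sample size of each small study
N_STUDIES = 2000

all_estimates, published = [], []
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    estimate = statistics.mean(sample)
    all_estimates.append(estimate)
    # Crude significance filter: z = mean / (sigma / sqrt(n)), with sigma = 1 known.
    z = estimate / (1.0 / N_PER_STUDY ** 0.5)
    if z > 1.96:  # only "significant" successes get published
        published.append(estimate)

print(f"True effect:                  {TRUE_EFFECT:.2f}")
print(f"Mean over all studies:        {statistics.mean(all_estimates):.2f}")
print(f"Mean over published studies:  {statistics.mean(published):.2f}")
# Typical output: all 2000 studies average about 0.20, while the few
# hundred published ones average about 0.45, more than double the truth.
```

No individual study here is dishonest; the distortion comes entirely from which results survive the filter, which is why failed experiments are part of the full picture.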
The author is a member of the MTA Lendület Values and Science Research Group and Assistant Professor in the Department of Philosophy and History of Science at BME.