As stated in the introduction to basic ideas about science, the sociology of science provides an understanding of science that is different from what is taught in schools. Sociologists of science study scientists, how they interact with one another, and how the public interacts with them. They tell us about aspects of scientists’ behavior that we would not normally learn in science courses, which focus on the end results of scientific research (i.e., discoveries) rather than the relationships among those doing the research.
Sociologists who study scientists can say a lot about them that may surprise you. Here is only a sample:
Scientists operate within a cultural context. Scientific activity includes not only the facts, but also the reasons why those facts are published in papers rather than other facts. Students who desire to become scientists are affected by familial, geographic, religious, and political factors. They enter specific fields rather than others. The number of scientists entering those fields affects the number of facts coming from them, and the culturally influenced values held by those scientists affect the types of hypotheses tested and the degrees to which other scientists perceive those hypotheses as interesting and worthy of discussion.
Feminists argue that culture is responsible for the physical sciences being composed mostly of men, and for the social sciences being composed mostly of women. They say these proportions lead boys to prefer the physical sciences and girls to prefer the social sciences, perpetuating an imbalance between men and women in these sciences and biasing research in a way that reflects this imbalance.
Scientists believe some answers are more important than others. The answers they prefer determine which questions they prefer to ask and which hypotheses they prefer to test. Thus, scientists ask certain questions and test certain hypotheses rather than others, and these choices are influenced by their values.
Since their values guide their research, the research that they publish will reflect their values. This also means that a great deal of research is never published, in part because it did not reflect the values of scientists.
Yet, sometimes a scientist has research published even though other scientists do not value it. If few scientists believe that research to be important, then they will not likely share it with the public. Thus, the science that the public is aware of is the science that reflects the values of scientists.
As feminists have argued and science sociologists have discovered, a culture affects the beliefs and perspectives of its members. Anti-religious atheists love evolution. Buddhists are attracted to neuroscience. Women are more likely to become social scientists than physical scientists. Alternate ways of thinking lead to alternate questions and hypotheses, and alternate questions and hypotheses lead to alternate facts. The reality that the facts obtained exist in part due to the values of scientists does not mean that these facts are false. Perhaps they are true. But values guide scientists such that one body of knowledge is obtained rather than another.
The values that influence scientists the most are probably political and religious values, because these values greatly influence most human beings. Political and religious views influence our behavior and our interpretation of data, and scientists, being people, have political and religious views.
Practically all surveys inquiring about political affiliation offer participants only Democrat, Republican, and Independent, or conservative, liberal, and moderate, as options. Libertarian is practically never an option, and I have never seen socialist, communist, progressive, or anarchist as options. If scientists ask only about those viewpoints, then only those viewpoints will be represented in the science. This leads the majority of the public to believe that those are the only views available when considering adopting a position.
Below are some statistics concerning political views:
Political Affiliation (%)

Scientific Field      Independent   Republican   Democrat
Biology/Medicine          31             6           58
Chemistry                 37             9           49
Geoscience                25             4           62
Physics/Astronomy         35             6           53
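To make the imbalance in the table above concrete, the Democrat-to-Republican ratios can be computed directly from the reported percentages. The figures come from the table; the script itself is only an illustration:

```python
# Democrat-to-Republican ratios, using the survey percentages from the table above.
affiliations = {
    "Biology/Medicine":  {"Independent": 31, "Republican": 6, "Democrat": 58},
    "Chemistry":         {"Independent": 37, "Republican": 9, "Democrat": 49},
    "Geoscience":        {"Independent": 25, "Republican": 4, "Democrat": 62},
    "Physics/Astronomy": {"Independent": 35, "Republican": 6, "Democrat": 53},
}

for field, pct in affiliations.items():
    ratio = pct["Democrat"] / pct["Republican"]
    # e.g. prints "Geoscience: 15.5 Democrats per Republican"
    print(f"{field}: {ratio:.1f} Democrats per Republican")
```

Even the field with the smallest gap (Chemistry) reports more than five Democrats for every Republican.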
In 2003, among anthropologists and sociologists combined, there were 21 times more Democrats than Republicans. [Klein]
A summary of surveys of social science professors conducted from 2005 to 2009 found the following [Duarte]:
- Liberal 58-66%
- Conservative 5-8%
- 8 times more Democrats than Republicans
…and among psychologists:
- Liberal 84%
- Conservative 8%
Further, there is evidence that this liberal bias within psychology threatens its integrity. For example, in 1991, psychologist Ronald Fox publicly encouraged colleagues to fight for radical left-wing positions and to mix values with data in order to argue for the changes they desired [Redding].
Dr. Fox became president of the American Psychological Association in 1994.
An example of political affiliation directly influencing research can be seen in a 2010 study of economics professors [Davis]. The authors are libertarian, and not coincidentally their study included ‘libertarian’ and ‘green’ as options (perhaps because they empathized with members of third parties). Here are the results:
Party Affiliation Among Economics Professors (%)
- Green 1.7
- Libertarian 5.4
- Republican 20.7
- Democrat 56.4
Values pertaining to religion also influence science. There is an unwritten rule, widely accepted by scientists, to oppose the publication of literature that endorses or is interpreted to endorse the existence of God, but to permit the publication of literature that does the opposite. For example, there was backlash when this paper appeared to advocate the existence of a Creator.
Another paper, however, explicitly stated that a particular study’s findings “solidly refute all parts of the intelligent design argument” [Adami]. Not only was this paper published without backlash, but it contains an obvious contradiction that confirms the existence of bias. The contradiction occurs when the author claims that those “alternate ideas [intelligent design], unlike the hypotheses investigated in these papers, remain thoroughly untested. Consequently, whatever debate remains must be characterized as purely political”. The paper states that an idea interpreted to support the existence of God is not testable, yet it also states that this idea has been tested and refuted. The implication is that an idea appearing to support the existence of God is testable…insofar as it is refuted.
This bias does not mean every scientist has the same views about religion. In 2009, both social and non-social scientists were asked for their opinions about religion [Ecklund]:
- 69% accepted some religious teachings.
- 68% said they were spiritual.
- 64% did not perceive a conflict between scientific knowledge and religious knowledge.
- 22% believed their peers viewed religion positively.
- 45% believed their peers viewed religion negatively.
After scientists write a paper, they submit it to a journal for publication. Before publishing, the journal asks other researchers in the same field, usually two or three, to read the paper. The main duty of these ‘peer reviewers’ is to give their opinion as to whether the paper should be published, and sometimes they offer the authors suggestions for improving the writing. The reviewers then send their recommendations to the journal’s editor, who gives final approval. The same work can be shared via spoken word, lectures, seminars, documentaries, or books without first being peer reviewed.
The obligation to have a paper reviewed prior to publication in a journal is based on historical social conditions. Before the internet, television, and radio, the only way to share information widely was through writing; scientific knowledge could be spread only on paper. But paper was not free, and a journal can fit only so many pages. Publishers could not publish literally everything submitted. They had to pick and choose.
This process became the norm, but this norm is becoming obsolete because scientific knowledge today can be shared widely without the use of paper. There is, for all practical purposes, unlimited space for this information.
Scientists believe that peer review helps prevent bad papers from being published. This belief has been tested, but not confirmed.
A small part of the sociology of science is scientometrics: the study of the number of papers scientists publish, how often these papers are cited, how citation impacts scientists and their careers, and trends in scientific careers.
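One well-known scientometric measure, not named above but a standard example of the field, is the h-index: a scientist has index h if h of their papers have each been cited at least h times. A minimal sketch, assuming we have a list of citation counts per paper:

```python
# h-index: the largest h such that the scientist has h papers
# with at least h citations each (a standard scientometric measure).
def h_index(citations):
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this paper still supports an index of `rank`
            h = rank
        else:
            break
    return h

# Example: five papers cited 25, 8, 5, 4, and 3 times.
print(h_index([25, 8, 5, 4, 3]))  # → 4
```

Measures like this are exactly what scientometrics studies: how citation counts are distilled into numbers that then shape hiring, funding, and careers.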
Communication and Social Construction
‘Social constructivism’ is a smarty-pants term for ‘made’. The idea of social constructivism is that evidence and facts are made, created, constructed. They do not fall from the sky. They are not given by the gods. Evidence and facts are made in the sense that they are shaped by the vocabulary and images used to communicate them.
This is the reasoning behind social constructivism:
How we communicate affects how we think, and how we think affects how we communicate.
We communicate through words and images.
Different words and images–vocabulary, drawings, graphs, models–communicate ideas differently. The vocabulary and images that scientists use affects their perception about ideas, and their perception affects the vocabulary and images they use to communicate those ideas.
Scientists’ perception influences their behavior, and their perception is influenced by vocabulary and images. Thus, vocabulary and images affect scientists’ behavior.
Scientists’ behavior determines how they interact with what they study. How they interact with what they study determines what they experience–the evidence they acquire and the facts they publish. Thus, the vocabulary and images scientists use influence the evidence and the facts.
This does not mean that evidence and facts are fake, false, or fictitious. If something is made, then it exists! It is the perception, understanding, and communication about evidence and facts that is made by humans; that which the evidence supports and that which the facts refer to may be real independent of our communication. For example, the words ‘chi’ and ‘bioelectricity’ probably refer to the same thing. This thing is not (purposely) constructed by us. But the way we talk about it, and consequently our perception of it and interaction with it, is.
Silencing: Censorship and Stonewalling
Two psychologists, Douglas Peters and Stephen Ceci, tested the hypothesis that peer review is reliable. Their experiment showed it is not. Consequently, they were harassed. One editor threatened to sue for copyright violation. Some threatened to censure them or to reject their colleagues’ work. Two journals declined to publish their paper, yet published reviews of that paper laden with personal attacks. There was nothing anti-scientific about their research. It was only the conclusion of that research that several scientists wanted to block from being publicized.
It is commonly said that scientists care only about evidence and facts. Predictably, it is scientists–desiring to earn trust–who commonly say it. However, given that science is a human activity, and given that in any kind of human activity there will be politics and biases, then it should not be shocking that politics and biases exist within science.
Sometimes scientists have their views silenced. ‘Silencing’ is a broad term describing how scientists are unjustly impeded in their efforts to advance their research. Silencing a scientist may involve mischaracterization, manipulation of data to contradict a hypothesis, outright fraud, omission of relevant facts, or arbitrarily changing the requirements for publishing a paper.
When scientists are silenced, then their research is unlikely to be widely known. This places them in a minority position, and silencing often involves discouraging the public and other scientists from taking their research seriously precisely because it is held by only a minority. Those who support learning and thinking critically about such research may be harassed, ostracized, and subjected to personal attacks. At the place of employment, a minority scientist can be silenced by being punished with multiple trivial duties, transferred to a less important department, or fired. When the scientist is fired, the employer practically always claims that they were not fulfilling their duties at work. This claim helps to avoid losing a lawsuit for unlawful termination.
Censorship is a specific form of silencing, and involves overtly banning scientists’ work. This happened to molecular biologist Peter Duesberg, who was prevented from replying to criticisms against his research pertaining to AIDS.
Stonewalling is another form of silencing: repeatedly refusing to cooperate with the person being silenced. Letters and emails go unanswered, requests to speak at conferences are ignored, or there are delays in providing requested documents in lawsuits for unlawful firing.
One of the most harmful examples of silencing concerns medical testing on mice. There are numerous instances, dating back to 1879 [Moran], of scientists stating that medical tests on mice do not reliably inform them about how humans will respond to those same tests. Nevertheless, a large amount of medical research is still conducted on mice, and many scientists and doctors behave as if the results of this research are applicable to humans.
So, silencing, censorship, and stonewalling indicate that value judgments affect which science is allowed to be made public, and which scientists are permitted to continue their careers as part of that particular community. A consequence of these behaviors is that graduate students and scientists early in their careers may feel pressured to conform to established views and avoid correcting older colleagues because they fear having their careers harmed as retribution. There is also an increased likelihood that papers containing groundbreaking discoveries will be rejected for publication, not because the methodology is flawed, but because the reviewers of those papers personally dislike the researchers and/or their conclusion.
Cooperation and Competition
“Science is a matter of both cooperation and competition… The most important sort of cooperation that occurs in science is the use by one scientist of the results of the research of other scientists… Scientists want their work to be acknowledged as original, but just as importantly, they want it to be accepted. To get it accepted, scientists must gain support from other scientists. One way to gain support is to show that one’s own work rests solidly on preceding research, but the price of this support is the decrease in apparent originality. One cannot gain support from a particular work unless one cites it, and this citation automatically confers worth on the work cited and detracts from one’s own originality.” [Hull]
As with business persons, scientists both work together to reach a goal as well as battle against each other to get there first. Being people, scientists desire to contribute to society, generate income for themselves, and gain status and recognition. These desires get mixed into the entire scientific enterprise, leading some research to be immensely beneficial to others and other research to be of little use to anyone but the researchers and their social or financial status.
Just as there is no single scientific method, so also there is no single scientific community. There are multiple communities of scientists, each interacting internally but rarely with scientists in other communities. The cooperation among scientists tends to be limited to those within their fields, and those fields are often highly specialized. Thus, a scientist can develop advanced knowledge about a specific issue but lack knowledge of how their research relates to research in other fields. These scientists will not connect the dots and see the bigger picture.
However, there is still some communication among scientists, even when they work for competing employers. Scientists who are competitors might attend the same lectures, where ideas are shared among both associates and competitors.
Given this interaction between allies and opponents, there is both praise and ridicule within science communities. Scientists administer awards for discoveries, and also occasionally mock another scientist’s work or idea.
Adami, C. (2006). Reducible Complexity. Science, 312 (5770), 61-63.
Ceci, S. J. & Peters, D. P. (1982). Peer review: A study of reliability. Change, September, 44-48.
Davis, William L., Bob Figgins, David Hedengren, and Daniel B. Klein. Economics Professors’ Favorite Economic Thinkers, Journals, and Blogs (along with Party and Policy Views). Econ Journal Watch 8, no. 2 (May 2011): 126-146.
Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (2015). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 38, 130.
Ecklund, E. H., & Park, J. Z. (2009). Conflict between religion and science among academic scientists?. Journal for the Scientific Study of Religion, 48(2), 276-292.
Hull, D. (2001). Science and Selection: essays on biological evolution and the philosophy of science. Cambridge University Press. p. 101
Klein, Daniel B, and Charlotta Stern. Policy Views of Social Scientists. Critical Review. 17, no. 3-4 (July 2005): 257-303.
Moran, Gordan. Silencing Scientists and Scholars in Other Fields: power, paradigm controls, peer review, and scholarly communication. Ablex Publishing, 1998. p. 49.
Redding, R. E. (2001). Sociopolitical diversity in psychology: The case for pluralism. American Psychologist, 56(3), 206.
Dalton, R. (2004). Review of tenure refusal uncovers conflicts of interest. Nature, 430, 598.
Dewitt, Richard. Worldviews: an introduction to the history and philosophy of science. Blackwell, 2004.
Goldman, Stephen, L. Science Wars: what scientists know and how they know it. The Teaching Company, 2006. DVD.
Faye, Jan. Rethinking Science: a philosophical introduction to the unity of science. Ashgate Publishing Company, 2002.
Fuller, Steve. The Philosophy of Science and Technology Studies. Routledge, 2006.
Kuhar, M. J. (2008). On blacklisting in science. Science and Engineering Ethics, 14, 301–303.
Martin, Brian. Enabling Scientific Dissent. New Doctor (88), December 2008. PDF
Restivo, Sal. Science, Society, and Values. Associated University Presses, 1994.
Schuster, John. An Introduction to the History and Social Studies of Science. 1995. PDF