MORE FROM GUTENBERG

Is a Revolution in Science in the Offing?

During the summer, Chris Swanson, a fellow tutor at Gutenberg, passed along to me an article from The Economist: “Brought to book: Academic journals face a radical shake-up” (July 21, 2012). The occasion for the article was the British government’s announcement on July 16th that, starting January 1, 2013, all taxpayer-financed research would be available, free and online, for anyone to read and redistribute. The reason given for the new law was that journal publishers are seen as an impediment to scientific progress. It is true that the journals are facing a radical shake-up, but this new law also amounts to a potential revolution in science.

According to the article, the criticism of the journal publishers boils down to two things. The first is that it takes months to get a paper through their process, when the results could be published in days or hours on the internet. The second criticism is that the publishers have a monopoly, and they can and do charge any amount of money they want for their journals. They can get away with this because scientists must have access to the published work: they must read the latest journal articles to stay current and to place their work in the context of other ongoing work. So, for example, the article pointed out that Elsevier, a large Dutch scientific publisher, made a profit margin of 37% in 2011. (Even academic science publishing works along principles articulated by Adam Smith.)

The critics of the new law claim that the publishers perform a vital part of the scientific process: they provide peer-review—critical, sometimes anonymous, reviews by other experts in the field. Peer review is the scientific version of sorting sheep from goats. Reviewers determine what gets published and what does not. In addition, they determine the worth of a scientific paper. The more prestigious the journal someone’s work appears in, the better and more important the work is seen to be. Determining the worth of a scientific paper is seen by the critics as an important function of scientific publishing. Therefore, the journals play a critical role in science.

But how will science papers be peer-reviewed in Britain after January 2013? Several options are being evaluated. One is the “gold model,” a business model already in place in the U.S.: the Public Library of Science, a non-profit based in San Francisco, charges authors a fee of $1,350 to $2,900 per paper to make it freely available over the internet. A second option, the “green model,” is currently in use by the National Institutes of Health (NIH): papers are published in peer-reviewed journals the same as always, but they must be made freely available online within a year. A third option is for scientists to publish their papers online in public archives paid for by a network of universities. Numerous other possibilities have also been suggested.

In the end, it will be interesting to see which option or options are selected. For over a century, science has been seen as a privileged endeavor because of its method and peer review. It has been seen as resulting in more certain and objective conclusions than other endeavors. Peer review has also served as a gate-keeper, making sure that perspectives outside the acceptable limits of science are not given a platform for discussion. If an option leads to science papers being published online on public sites without peer review or gate-keepers, then the result will be a significant revolution in science. It will be interesting to watch this development as it unfolds.

 

Einstein and Gödel

Each period of history in Western culture has developed its own distinctive spirit, or zeitgeist. The zeitgeist of the Middle Ages involved an orientation toward the hereafter. That of the Renaissance involved a celebration of human ability and ingenuity. In every case, the zeitgeist exercises a powerful influence on the culture, the ideas and intellectual fashions of the day, often without even being perceived. The zeitgeist of the early twentieth century was particularly powerful, so much so that it completely misinterpreted two of its leading intellectuals: Einstein and Gödel.

Albert Einstein (1879-1955) is a household name, famous for his theory of relativity. He was a physicist who changed our notions of time and space. His theories may not be easily comprehended, but our culture reveres him as the paragon of intellectual achievement. His most significant contributions were made in his early years in Switzerland and Germany. Later in life, because of his Jewish origins, Einstein left Europe for the US and joined a newly created think tank, the Institute for Advanced Study in Princeton.

Kurt Gödel (1906-1978) is less well known, but his influence on mathematics, logic, and the theory of knowledge was profound. He challenged some deeply rooted assumptions about the nature of mathematics and proof. Specifically, mathematicians of the time believed that any mathematical theorem could, in principle, be derived from a set of fundamental axioms pertinent to that area of study. Further, they assumed that any such axiomatic system (a set of axioms together with everything that can be derived from them) could be shown to be free of contradiction. Gödel’s work showed that both assumptions were wrong.
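
For readers who want those two results stated a bit more precisely, here is a compact modern paraphrase (not Gödel’s own 1931 wording). Let T stand for any consistent, effectively axiomatized theory strong enough to express elementary arithmetic:

```latex
% First incompleteness theorem: some sentence G_T in T's own language
% can be neither proved nor refuted in T.
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T

% Second incompleteness theorem: T cannot prove the sentence Con(T)
% that formalizes "T is consistent."
T \nvdash \mathrm{Con}(T)
```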

Gödel shared much in common with Einstein. He, like Einstein, made his most important contribution in his early years. He also, as a result of World War II, emigrated to the US. And to complete the picture, he became one of the most celebrated members of that same new think tank, the Institute for Advanced Study in Princeton.

The parallels do not end there. Gödel and Einstein became fast friends, despite having radically different personalities. They would often take walks on the Institute grounds and discuss esoteric mathematical and physical theories. Each seemed to feel that he had found in the other an intellectual soul mate.

Gödel himself, in an essay entitled “The Modern Development of the Foundations of Mathematics in the Light of Philosophy,” provides insight into why he and Einstein were so compatible. Gödel divides up philosophical systems “according to the degree and the manner of their affinity to or, respectively, turning away from metaphysics (religion).” He designates those philosophies which turn away from religion and metaphysics as “leftward.” Philosophers of this persuasion see no grand purpose, no deeper meaning. Life is material, and truth is relative. Philosophies which turn toward religion and metaphysics are “rightward” leaning and tend to find order and meaning in everything. These philosophers believe we are capable of finding meaning and that truth is knowable.

Gödel observes that Western culture since the Renaissance has been moving from right to left. Scientists, for instance, see the universe more and more as a cosmic accident subject to impersonal laws. Mathematicians have tended to examine more esoteric topics which have no intrinsic meaning or application. Both Gödel and Einstein were keenly aware of this trend. They were immersed in a left-leaning intellectual culture. While to some extent they were influenced by that zeitgeist, their deep intellectual camaraderie came from their mutual rejection of the leftward spirit of the age.

The spirit of the age, however, would not be denied. Despite Einstein’s and Gödel’s rejection of skeptical materialism, their work has been hailed as ushering in that very view. Einstein’s work was called the “theory of relativity.” The name itself indicates a leftward bias. (Interestingly, it was not Einstein who gave his theory its name.) But Einstein was insistent that his theory was speaking about reality, that it uncovered the mathematical structure of the universe, and that it was true. Einstein said that a rejection of metaphysics and truth was a mistake. The culture, however, focused on the implications. If such fundamental notions as time and space are “relative,” then all our fundamental beliefs must be revisited. All knowledge, it was argued, was relative. This was a gross misinterpretation of Einstein.

Gödel’s famous theorem is called the “incompleteness theorem,” another leftward name. But it was formulated in response to the leftward direction that mathematics had been taking. Mathematics had been slowly transformed from a subject about reality to an intellectual exercise in which conclusions are drawn from meaningless assumptions. All that mattered was to make careful deductions according to the mathematical and logical rules. Through these deductions, mathematicians found that systems of great beauty could be erected upon the foundation of a small set of axioms. A mathematical system was no longer true because it represented reality in some way. It was true if there were no logical inconsistencies inherent in the system. Gödel showed that such a sterile conception of mathematics is doomed to failure because the new criterion for “truth” is not achievable. Specifically, a mathematical system as complex as arithmetic cannot be proven, by its own resources, to be free of inconsistencies.

Gödel says in his essay that such a result is not a blow to mathematics or to logic or to our ability to prove anything. It merely shows that mathematics cannot be reduced to a wholly independent system of meaningless rules. Gödel believes in proof and knowledge and logic. He is simply rejecting the leftward conception of mathematics as a game of formal rules.

How, then, has Gödel been received? As with Einstein’s work, the culture has interpreted the incompleteness theorem as a death blow to all rightward philosophies. Some claim the theorem has proved that God cannot exist (I will not go into the details). Others have said that Gödel has shown that nothing can be known with certainty.

Both of these great thinkers were misunderstood. They both believed that they made truth and order graspable. But the spirit of the age was not to be resisted. Relativity theory and the incompleteness theorem have been seen as bastions of the philosophical left. Surrounded by such a culture, it is no wonder the two became fast friends.

 

Of Salmon and Winds

You don’t need a weatherman to know which way the wind blows. —Bob Dylan

In the 1970s, when I first started working with trout and salmon in the Pacific Northwest, there were a lot of things I did not know about them. Over the years, I have come to understand quite a bit about their life-history strategies—that is, why they are where they are at each stage of their lives. One major piece of the puzzle has eluded me for a number of years, however, and that is how the fish know when to go to the ocean. The question seems simple enough. Almost all of the trout and salmon found in streams in the Oregon Coast Range make their way to the ocean at some point. But when do they decide to go?

A lot is riding on the fish getting it right. It appears that the size of a salmon population is often determined within the first two weeks of the fish entering the ocean. (In some years, only a small number of juveniles [smolts] enter the ocean from the streams. In those years, the run will be small no matter what the conditions are when they enter the ocean.) The fish have to adjust to a new environment, new food, and new predators. They want to enter the ocean when food there first becomes abundant, but the window of best conditions is small. If they are too early, they will either starve or not grow fast enough to escape predators. If they are too late, a host of predators will have grown larger with the abundant food supply, and the salmon cannot grow fast enough to escape predation. In order to survive in the ocean, salmon must grow faster than their predators and ultimately become one of the large predators themselves.

But when does this window occur and how long does it last? It occurs sometime between late March and July, and in some years it never opens at all. When it occurs is largely determined by the winds. During the winter, the winds are out of the south, and from November to March the Oregon Coast receives the majority of its 120 inches of rain a year. During the spring transition period, the winds are out of the north between storms but out of the south during storms. Once the northerly summer pattern is established, the winds are strong and predictable, which allows for “upwelling”—near-shore currents that bring nutrient-rich water to the surface. Algae and zooplankton populations boom, which then attracts small bait fish, salmon, and predators. But how do salmon in the streams know when upwelling occurs in the ocean?

During the spring transition period, the streams rise and the water temperature falls with each spring rain. Once the spring transition period ends, the spring storms end. The stream flow rapidly drops, and stream temperatures rise with the sunny days. I believe that the juvenile salmon are instinctively migrating out of the streams when they sense that stream flows are getting low and the stream temperatures are rising. This coincides with the window of upwelling in the ocean.

This summer, I intend to examine three pieces of information to see if they corroborate this story of early salmon survival in the ocean. The first is the number of salmon smolts going to the ocean. I, along with other interested people, have been running a fish trap near the mouth of a small Oregon Coast Range stream near Mapleton for over twenty years. Daily, from March to June or July, we estimate how many trout and salmon smolts are heading to the ocean, so we know what days they migrated out of the stream and the stream flow for each day. The second piece of information is the long-term wind record for Oregon Coast airports. And the third is information on upwelling that I am getting from a retired Oregon State University oceanographer; it includes physical and chemical information, such as temperature and nutrient levels, as well as estimates of algae and zooplankton populations. I look forward to sitting down and working through this information this summer.
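
Here is a minimal sketch, in Python, of the kind of comparison I have in mind. The file names, column names, and thresholds are hypothetical stand-ins; the actual trap counts, airport wind records, and upwelling data will need their own handling.

```python
import pandas as pd

def spring_transition_date(stream: pd.DataFrame) -> pd.Timestamp:
    """First date after March 1 when flow has dropped and temperature has
    risen (a stand-in for 'the spring storms have ended')."""
    year = stream.index[0].year
    s = stream.loc[stream.index >= pd.Timestamp(year, 3, 1)]
    ok = (s["flow_cfs"] < 50) & (s["temp_c"] > 10)  # illustrative thresholds
    return ok.idxmax() if ok.any() else pd.NaT

def upwelling_onset(wind: pd.DataFrame) -> pd.Timestamp:
    """First date ending a run of ten consecutive days of northerly
    (upwelling-favorable) wind."""
    northerly = (wind["direction_deg"].between(315, 360) |
                 wind["direction_deg"].between(0, 45)).astype(int)
    runs = northerly.rolling(10).sum()
    hits = runs[runs >= 10]
    return hits.index[0] if not hits.empty else pd.NaT

# Daily records, one row per day, indexed by date (hypothetical files).
stream = pd.read_csv("stream_daily.csv", parse_dates=["date"], index_col="date")
wind = pd.read_csv("coast_airport_wind.csv", parse_dates=["date"], index_col="date")
smolts = pd.read_csv("smolt_trap_counts.csv", parse_dates=["date"], index_col="date")

rows = []
for year, s in stream.groupby(stream.index.year):
    w = wind[wind.index.year == year]
    m = smolts[smolts.index.year == year]
    rows.append({
        "year": year,
        "transition": spring_transition_date(s),
        "upwelling": upwelling_onset(w),
        "peak_migration": m["smolt_count"].idxmax(),  # date of peak out-migration
    })

summary = pd.DataFrame(rows)
# If the story holds, peak migration should track the spring transition,
# which in turn should fall inside the window opened by upwelling.
print(summary)
print(summary[["transition", "upwelling", "peak_migration"]]
      .apply(lambda col: col.dt.dayofyear).corr())
```

If the hypothesis is right, those three dates should move together from year to year; years when they drift apart would be worth a closer look.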

 

[Last weekend was Mother’s Day, and I dedicate this post to my mother, who provided me with opportunities and encouragement to pursue my love for investigating trout and salmon.]

 

Rachel Carson, DDT, and Malaria

In an earlier post, I listed what I consider the eight most important writings in environmental ethics. I included Rachel Carson’s Silent Spring as one of the eight, but I noted that it was controversial. That controversy is the subject of this post.

Silent Spring, published in 1962, questioned the indiscriminate spraying of DDT, an insecticide, in the U.S. It challenged the logic of releasing large amounts of chemicals into the environment without understanding where these chemicals go and what their effects on human health and the environment are. The result was the rise of the environmental movement, and DDT was ultimately banned in the U.S. in 1972.

Since it was published in 1962, Silent Spring has received mixed reviews from scientists. In a New York Times column titled “Fateful Voice of a Generation Still Drowns Out Real Science,” John Tierney expresses as clearly as anyone the view of scientists who do not agree with Rachel Carson. As he sees it, the human costs of banning DDT were horrific in poor countries when malaria increased after the ban. He believes the DDT ban brought about by Carson’s book substantially increased human deaths. In his view, banning DDT was reprehensible, and the science expressed in Silent Spring is bad science.

While I agree it was wrong to ban DDT (Rachel Carson herself did not call for a ban), I do not agree with the position articulated by John Tierney and other scientists. I will explain why.

DDT was used extensively during World War II to de-louse soldiers, and it was used to control mosquito populations, especially in malarial zones. It was cheap, effective, and considered safe. DDT is effective against the species of mosquito that carry malaria. How then could anyone object to using DDT to try to eradicate malaria or to the science that developed and tested it?

I will begin by listing the detrimental features of DDT when it is used to try to eradicate mosquitoes that carry malaria:

  • DDT is a wide-spectrum insecticide. It kills the good insects as well as the bad. For instance, it kills virtually all species of aquatic insects that are the primary food source for fish. On land, it kills virtually all insects, such as bees, that pollinate plants. In short, it can disrupt food supplies.
  • DDT and its breakdown product, DDE, persist in soils and streams; once applied, they can remain active for decades.
  • DDT “bio-accumulates” as it passes up the food chain. DDT sprayed at a low concentration in water is taken up by algae and passed up through zooplankton to fish and then to mammals and birds. DDT that is eaten is stored in fat tissues and in the milk of mammals. Birds and mammals, including man, can have concentrations of DDT in fat tissues and milk that are ten to a hundred times higher than the initial concentration sprayed. For instance, in 2005, the Centers for Disease Control and Prevention reported that DDT was still found in the blood of virtually all U.S. citizens, although at lower concentrations than in the previous decade, even though DDT was banned in 1972. Also, the levels of DDT in salmon in the Columbia River are high enough that pregnant women are warned not to eat more than one serving per month of Columbia River salmon.
  • DDT is classified as moderately toxic in acute toxicity (one-dose) tests for humans and the environment. To test for “safe” levels of DDT in water, varying concentrations of DDT are added to a series of aquaria with an equal number of fish in each. The experiment is aimed at determining the concentration at which fifty percent of the fish die; this is DDT’s median lethal concentration, or “LC50.” A safety factor is then applied (the LC50 is divided by the factor), and the resulting concentration is established as the safe concentration of DDT in water. This is the standard method for determining safe levels for any chemical in water (a rough sketch of the calculation follows this list). For humans, similar tests were conducted with rats in cages.
  • DDT has been linked to diabetes in chronic (continuous low doses) toxicity tests (e.g., http://www.ehponline.org/docs/2009/0800281/abstract.html).
  • DDT is less successful in eliminating the mosquitoes that carry malaria in the tropics, where they can breed year-round. Year-round spraying means year-round selection pressure, so using DDT in the tropics leads to strains of mosquitoes that are resistant to DDT.
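
To make the acute-toxicity procedure described in the list above concrete, here is a rough sketch of the calculation in Python. The concentrations, fish counts, and safety factor are invented for illustration; real bioassays follow standardized protocols and typically fit a formal dose-response model (probit or similar) rather than interpolating between two points.

```python
import math

# (concentration in micrograms per liter, fish exposed, fish dead) -- invented data
bioassay = [
    (0.5, 20, 1),
    (1.0, 20, 4),
    (2.0, 20, 9),
    (4.0, 20, 15),
    (8.0, 20, 19),
]

def lc50(data):
    """Estimate the median lethal concentration by interpolating, on a log
    scale, between the two concentrations whose mortalities bracket 50%."""
    points = [(math.log10(conc), dead / n) for conc, n, dead in data]
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            x = x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
            return 10 ** x
    raise ValueError("mortality never brackets 50%")

estimate = lc50(bioassay)
safety_factor = 100  # arbitrary illustrative factor
print(f"estimated LC50: {estimate:.2f} ug/L")
print(f"'safe' concentration (LC50 / {safety_factor}): {estimate / safety_factor:.4f} ug/L")
```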

John Tierney and a number of scientists believe the benefits of DDT spraying outweigh these detrimental effects. I am not so sure. While I acknowledge that spraying DDT can lead to short-term declines in the mosquitoes carrying malaria, I doubt that even the short-term benefits are worth it. Spraying DDT will increase reproductive problems, and high levels of DDT in mothers’ milk will increase infant health problems. DDT can also potentially disrupt food supplies.

Furthermore, the long-term effect of continued DDT spraying will be mosquitoes that are resistant to DDT; it will no longer be effective. And that is a very serious problem. DDT should be held in reserve for periodic serious outbreaks of malaria. It should not be used to try to eradicate malaria; not only will the attempt fail, but DDT will become ineffective against malarial mosquitoes when we do need to use it.

In the end, I believe that which side of this issue you are on depends largely on your philosophy of science. To the proponents of spraying, like John Tierney, science has made great strides in understanding nature; we have the knowledge necessary to control nature for the benefit of man. The scientific tests that isolate the effects of DDT from all other factors to determine acute toxicity levels are viewed as objective and certain. With this information, then, we can confidently move forward.

I am not so optimistic. I believe that nature is more complex than we can ever know. I am not so confident that we really understand the effects of DDT on humans or the environment. While I acknowledge that the acute (one-dose) toxicity tests are useful to get a general idea of the toxicity of a chemical, I find the chronic (long-term low-dose) toxicity tests less compelling. Over what range of concentrations should the tests be run and for how long? However, my biggest concern with chronic toxicity tests is that they don’t test the interaction between DDT and other pesticides. What is the interaction between DDT and other pesticides at low, long-term doses? Thousands of chemicals have been developed and used daily since World War II. How do they interact with each other? Also, what compounds do they break down into, and how do these chemicals interact with each other? We have little or no information about most of these interactions. Without this understanding, it seems to me that our knowledge is pretty limited with regard to pesticides and their effects. In short, as an ecologist, I do not believe that these simple reductionistic laboratory tests can deliver the knowledge necessary for us to confidently control nature for our benefit.

 

Critique of Christianity Is Uncompelling

On March 11, 2012, the Opinionator, an online commentary from the Opinion Pages of The New York Times, published a dialogue between Michael Lynch (a philosophy professor) and Alan Sokal (a physics and mathematics professor) titled “Defending Science: An Exchange.” Michael Lynch initiated the dialogue by making a point about what often gets lost in the culture wars: the debate over evolution isn’t just about evolution; it is a debate about “first principles.”

Alan Sokal claimed that he could fairly easily answer the challenge of fundamentalist Christians and religious people in general. His point was that while they have the same epistemic starting points as everyone else in everyday life, they supplement the ordinary epistemic principles with additional principles like “This particular book [the Bible] always tells the infallible truth.” He then asks, “Why this particular book?” and provides evidence that the Bible has so many internal contradictions that no one could possibly consider it infallible. He then claims by contrast that science is justified in using the general epistemic principles that we all share. Therefore, in Sokal’s view, everyone should be able to understand the outlines of his argument, and he attributes the fact that most citizens don’t buy it to “a major scandal concerning the teaching of science.” Sokal is encouraged by a few onetime fundamentalist Christians, like Bart Ehrman, who, after studying the Bible’s “internal contradictions and the history of its composition,” concluded that fundamentalist Christianity was untenable and abandoned it.

Sokal’s argument is highly flawed.

Let me begin my critique of Sokal’s argument by articulating my own first principles and my reasons for starting with these principles. My first principles are all those basic beliefs that are necessary to make knowledge possible. No one doubts that we have knowledge, so as I see it, philosophy’s object here is to articulate what is necessary to account for our knowledge. To be clear, I accept Scottish philosopher Thomas Reid’s “Principles of Commonsense.” Reid (1710-1796) argued that the principles of commonsense are those beliefs that any sane human being must accept to make knowledge and communication possible.

For instance, one such principle is “credulity”—that is, I believe what a speaker is telling me unless I have grounds to be suspicious. In other words, I assume that speakers are trustworthy, and I only question them if my suspicions are aroused. This principle can be extended to writers. So, I believe that Alan Sokal thinks he has shown that fundamentalist Christians and other religious people can be critiqued easily and, furthermore, that he has done so infallibly. If I do not accept the principle of credulity, no communication can take place here. If I question whether Sokal is trustworthy and whether he believes his argument is true, then what’s the point? I can’t prove that he is trustworthy from the text. In addition, I accept the principle of authorial intent. I believe that Alan Sokal is making an argument against a fundamentalist-Christian epistemology. My first job as a reader, then, is to try to understand his intended purpose sympathetically.

Now I will turn to Sokal’s argument itself. First, he begins by pointing out that Christians in their everyday lives use the same first principles as everyone else, and although he never articulates these principles, I will assume that he agrees with my commonsense principles. Next, however, he claims that all religious people supplement the ordinary epistemological principles with additional ones like “This particular book always tells the infallible truth.” While I will grant that some Christians would claim the accuracy of Scripture as a first principle, many other Christians and religious people would not make that claim. I’m one of them.

Sokal then provides his critique of that supplemental principle:

But then we have the right to inquire about the compatibility of this special epistemic principle with the other, general, epistemic principles that we share. Why this particular book? Especially, why this particular book in view of the overwhelming evidence collected by scholars (employing the general epistemic principles that we all share) that it was written many decades after the events it purports to describe, by people who not only were not eye-witnesses but who also lived in a different country and spoke a different language, who recorded stories that had been told and retold many times orally, and so on. Indeed, how can one possibly consider this particular book to be infallible, given the many internal contradictions within it?

My first criticism of Alan Sokal’s argument is this: He is wrong that the scholars he points to as authorities are employing the general epistemic principles that we all share. Because he does not name these scholars, I will have to identify them by the conclusions they reached and assume that we are referring to the same set of scholars.

It seems to me that one can take two positions with regard to the historical reliability of the Bible. One position is grounded in the commonsense principles I mentioned above. It assumes that a historical document has integrity—that it attempts to relate the facts reliably—unless sufficient cause exists for suspecting that it does not. This position is consistent with rationality itself.

The second position I will call a “skeptical” or “pseudo-intellectual” position. It might also be called an “objectivist’s view.” This position assumes a historical document has no integrity as a testimony to the facts unless sufficient evidence demonstrates its integrity as a historical document. This position is contrary to the commonsense principles I articulated above, and it is contrary to the foundation of human reason itself.

The academic scholars I have examined (and to whom I think Sokal is referring)—those who have concluded that the Bible is not historically reliable—have reached their conclusion from position two. Therefore, it is not surprising that they come to the conclusions that they do. But this skeptical position is not compatible with our commonsense principles. Furthermore, this approach destroys the possibility of any knowledge of history.

I, on the other hand, accept the commonsense epistemic principles. I read both Koine and classical Greek. I have been studying the Bible since the early 1980s when I became a Christian. While I do not claim the historical reliability of the Bible as a “first principle,” it is a working hypothesis. If I can be shown an error, I would abandon not only my working hypothesis but Christianity as well.

Alan Sokal next argues that science is also built on the foundation of the general epistemic principles that we all share. But that is not true. Science, at least as it is practiced and articulated by scientists, is not built on the commonsense principles. Most of the scientists I work with (I am an ecologist with a Ph.D. in philosophy of science) would claim to be objective empirical scientists. In their view, what characterizes science is a method; and by following the method of science, they believe the results obtained are more objective and more certain than other pursuits of knowledge. Most versions of this philosophy of science come either from positivism or from philosopher of science Karl Popper (1902-1994). From positivism comes the idea that science is “unbiased measurement,” an idea rooted in the skeptical philosophy of David Hume. (While Hume was not skeptical in his everyday life, his philosophy was skeptical; his point was that philosophy cannot help us get past objective facts.) From Popper comes the idea that science is “hypothesis testing.” Popper’s method is based on falsifying hypotheses—i.e., finding a case that is not true and thus disproving the hypothesis. Popper’s method is skeptical; using it, we can never prove anything true; we can only say we have tested and not rejected a hypothesis. Popper’s method also replaces intellectual judgment with a statistical test. Popper’s view of science, then, is not compatible with commonsense principles, and it is anti-intellectual, replacing human judgment with a mechanical test. His view of science, in effect, rejects human rationality. For a more complete argument, see my 2006 book, Intelligent Discourse.

My second criticism of Alan Sokal’s argument is this: His assertion that no one with integrity should accept the Bible as historically reliable because of the overwhelming evidence collected by scholars implies a rejection of commonsense principles. During the Middle Ages, the authority for knowledge also rested with a community of experts: the priesthood. I view Galileo’s (and Luther’s) claim that the authority of science (and religion) rested with the individual and not the community of experts to be one of the milestones of the modern age. I believe that Galileo’s (and Luther’s) move away from the view of the medieval Catholic Church was a good and right move; the Church’s view was wrong. Yet Sokal’s implied argument regarding authority is the same as the medieval church’s position—and thus a giant step backwards. Although he did not address this issue in science, I suspect Sokal would hold that the authority of science rests with the community of workers through the peer-review process. Again, this is a reversion to the medieval view of authority. I do not accept this move. It is against commonsense principles.

My last criticism of Alan Sokal’s argument is a bit of a quibble, but I add it because it might help open the door to real dialogue about science and evolution. Sokal acknowledges that the Jesuit astronomers were not completely irrational in doubting the reliability of telescope observations. On that point I agree with him, but my reasons are different from his. Sokal argues that the astronomers doubted the telescope’s reliability because they did not understand its workings. I, however, think that the Catholic astronomers were rational in not attributing a great deal of credibility to the telescope observations because there was little to be gained. While the telescope did help break down Aristotle’s view of the heavens (perfect spheres), it did not show stellar parallax, which at the time was the greatest stumbling block to the Copernican claims: if the earth moves, then we should see that motion reflected in the stars. The telescope evidence did not tip the scale in favor of the Copernican view.

Most scientists and philosophers of science view the Copernicans as the founders of modern science. I agree. However, most scientists and philosophers of science then go on to say that the Copernican revolution is based on the new method of observation and experience. I can’t see how that is right—at least with what I take to be the normal understanding of experience and observation. To understand my point, watch the sunrise tomorrow morning. What do you see? You see the sun move above the horizon and the earth remain stationary. That is your observation and your experience. If you are an empirical scientist (an empiricist), I do not understand how you can be a Copernican. The data of your senses tells you that Ptolemy was right. To be a Copernican, you must set aside your sense data and experience for something of higher epistemic value.

Galileo expressed this best when asked why there were so few Copernicans. He responded:

Nor can I ever sufficiently admire the outstanding acumen of those who have taken hold of this [Pythagorean/Copernican] opinion and accepted it as true; they have through sheer force of intellect done such violence to their own senses as to prefer what reason told them over that which sensible experience plainly showed to the contrary. (Galileo, Dialogue, p. 328)

If science is this empirical method of observation and experience, then I fail to see how the Copernicans founded this new science. I am asking for a clear articulation of this new method of experience and observation. The Copernicans fit within my epistemic first principles, but I cannot see how they fit in Sokal’s first principles based on observation and experience.

In conclusion, I find Alan Sokal’s argument highly flawed. He claims that all religious people supplement their epistemic first principles with additional flawed principles, whereas historical scholars and scientists base their research only on the shared epistemic first principles. He is wrong on both counts.

________________

Literature Cited:

Dewberry, Charley. Intelligent Discourse: Exposing the Fallacious Standoff Between Evolution and Intelligent Design (Eugene: Gutenberg College Press, 2006).

Galilei, Galileo. Dialogue Concerning the Two Chief World Systems (1632; reprint, Berkeley: University of California Press, 1995).
