Irreducible Contradiction
By Mark Perakh
First posted on September 11, 1999.
Updated in May 2002.
Contents
Introduction
Behe calculates probabilities
Additional remarks on Behe's use of probabilities
Anti-Darwin crusade
Intelligent design according to Behe
Complexity as a façade for probability
Complexity from a layman's viewpoint
What is the real meaning of irreducibility?
Maximal simplicity plus functionality vs irreducible complexity
Two facets of intelligent design
Excessive complexity
Absence of self-compensatory mechanisms
Conclusion
Appendix
Comment
1. Introduction
Michael J. Behe's book Darwin's Black Box (Simon and Schuster,
1996), subtitled "The Biochemical
Challenge to Evolution," is one of the most popular polemical publications
arguing against Darwin's theory of evolution.
Behe's book is aimed at providing an argument in favor of
the so-called "intelligent design" theory. Briefly, that theory states that
the universe, and, more specifically, life, is not the accidental outcome of a
spontaneous chain of random events but the result of a deliberate design by an
intelligent mind. Usually the proponents of the intelligent design theory do not
discuss the question of who the designer is. Sometimes they indicate (see, for
example, part 3 of W. Dembski's book Intelligent Design, InterVarsity
Press, 1999) that clarifying who the designer is must be a task for theology. However, there is little doubt that the designer implied by the theory in
question is a supernatural mind, i.e. God.
The popularity of Behe's
book is evident, for example, from the inordinately large number of reviews
submitted by readers to the Amazon.com web site. While other books of the same genre rarely attract more than a couple of
dozen reviews, hundreds of readers have expressed
their views in regard to Behe's book, and the flow of reviews shows no sign of
abating.
Another manifestation of the strong splash made by Behe's book can
be seen, for example, in a voluminous collection of articles titled Mere
Creation edited by William A. Dembski (InterVarsity Press, 1998) in which
almost every paper contains a reference to Behe's book. The level of discourse
in that collection is uneven, but it includes some fairly sophisticated and
insightful papers in which numerous references are made to Behe's book as an
allegedly revolutionary step toward proving intelligent design.
We read on the cover of Behe's book opinions of some prominent
supporters of the intelligent design concept, who extol the virtues of Behe's
breakthrough on the way to the complete defeat of Darwinism and Neo-Darwinism.
For example, David Berlinski, a
mathematician known as an outspoken adversary of Darwin's theory of evolution
says: "Mike Behe makes an overwhelming case against Darwin on the biochemical
level. No one has done it before. It is an argument of great originality,
elegance, and intellectual power."
A similar view of Behe's book is evident from some of the references
made to that book by the mathematician and philosopher Dembski in his books
The Design Inference (Cambridge University Press, 1998) and the already
mentioned Intelligent Design. Professor of law Phillip E. Johnson, who is one of the most prolific propagandists for the
intelligent design theory, touts Behe's book in equally enthusiastic terms in
several of his books (for example, in Defeating Darwinism by Opening
Minds, InterVarsity Press, 1997). Hence,
there seems to be a view, widely shared among people of various professional
backgrounds who believe in "intelligent design" and oppose Darwin's theory of
evolution, that Behe's book provides an indisputable argument in favor of
"intelligent design" and thus against all versions of Darwinian hypotheses and
theories.
Behe's book, while highly acclaimed by many proponents of the
intelligent design, was rather strongly criticized by the opponents of his
thesis, including prominent biologists, such as professors Kenneth Miller and
Russell Doolittle. This criticism, though,
does not seem to have impressed Behe, who continues to publish papers in
which he shows no sign of any intention to modify his views. He repeats exactly
the same arguments time and time again, despite very serious objections to those
views from many critics.
Behe is a biochemist and his book reveals his wide knowledge of that
subject. Since I am not a
biochemist, I will not try to delve into Behe's detailed descriptions of
biochemical systems; he is in his domain there whereas I am not. I shall accept
Behe's biochemical discussion as flawless, even assuming that some other
critics, possessing a wider ken in biochemistry and related fields, could
possibly argue about some details of those biochemical descriptions. (A review of Behe's book from a biochemical viewpoint, which, in my
view, is a devastating, clearly presented and highly convincing rebuttal of
Behe's entire approach, was given by Kenneth Miller, a professor of biology at Brown
University, at KR Miller's review of Darwin's Black Box.)
While the biochemical descriptions take up a considerable part of
Behe's book, in his determination to offer a strong rebuttal of the evolution
theory he ventures rather far outside biochemistry, invoking certain
mathematical and philosophical concepts, and it is these excursions beyond
biochemistry that are the target of this critical review. My intention is to
show that Behe's principal concept is poorly
substantiated and in no way proves his thesis.
Before discussing in detail Behe's main concept, it seems proper to
point out that when Behe ventures beyond biochemistry, he sometimes looks more
like a dilettante than an expert. One
of the examples of Behe's dilettantism is how he discusses probabilities.
Calculations of probabilities, for example, for the spontaneous origin of life,
are very common in books aimed at disproving the theory of a natural emergence
of life. More often than not, these
calculations produce exceedingly small probabilities which lead to the
conclusion that, for example, the spontaneous emergence of life was too
improbable to be taken seriously. There
are also some opponents of the natural emergence of life who realize that the
small probability alone is irrelevant. For example,
Dembski, who is better versed in probabilities, correctly points out, more
than once, that a small probability in itself is not a proof. Therefore Dembski
suggests more elaborate criteria to decide whether an event was the result of
chance or of design. I will discuss Dembski's theory later in this paper.
I will discuss Behe's treatment of probabilities for two reasons.
One reason is to support my earlier statement that Behe's excursions beyond
biochemistry reveal his dilettantism in certain areas which are relevant to the
discussion of his main thesis. The second, and more important reason is the fact
that Behe's main thesis, his acclaimed irreducible
complexity, inseparably includes a certain probabilistic foundation.
2. Behe calculates probabilities
Unfortunately, Behe's discussion of
probabilities is just a recital of many other similar calculations, based on an
insufficient understanding of probabilities. On pages 93 through 97 of his book, Behe criticizes Professor
Doolittle's explanation of the blood clotting sequence. The discussion is about the probability that Tissue Plasminogen Activator
(TPA) could have been produced by chance rather than by design. Behe suggests
some calculations. On pages 93-94 we read: "Consider that animals with blood-clotting cascades have roughly 10,000
genes, each of which is divided into an average of three pieces. This gives a
total of about 30,000 gene pieces. TPA has four different types of domains. By
'variously shuffling,' the odds of getting those four domains together is
30,000 to the fourth power, which is approximately one-tenth to the eighteenth
power."
First, let us note the imprecision of
Behe's statement. Obviously, 30,000 to the fourth power is a very large
number, while one-tenth to the eighteenth power is a very small number, so these
two numbers are not even "approximately" close to each other.
Probably Behe meant to say one
in 30,000 to the fourth power. This is a small mistake, but it hints at
Behe's possible discomfort with mathematics.
Indeed, he continues, as follows: "...if the Irish sweepstakes had odds
of winning of one-tenth to the eighteenth power, and if a million people played
the lottery each year, it would take an average of about thousand billion years
before anyone (not just a particular
person) won the lottery."
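For reference, the arithmetic behind both quoted figures is easy to reproduce. The short sketch below (in Python; the layout and variable names are mine, while the numbers are Behe's own) shows where the "one in ten to the eighteenth power" and the "thousand billion years" come from.

```python
# Reproducing the arithmetic behind Behe's figures, using his own numbers.
gene_pieces = 30_000        # Behe's count of gene pieces
domains = 4                 # TPA has four types of domains
combinations = gene_pieces ** domains
print(combinations)         # 810000000000000000, i.e. about 8.1 * 10**17

print(1 / combinations)     # about 1.2e-18: "one in ten to the eighteenth power"

# Behe's lottery analogy: a million tickets per year against these odds.
players_per_year = 1_000_000
print(combinations / players_per_year)   # about 8.1e11 years, the "thousand billion years"
```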
Behe's statement is flawed in several respects. Let us briefly discuss them.
First, Behe's example is contrived: by
artificially and drastically decreasing the chance of winning, he
makes his example irrelevant. The
trick used by Behe is in his selection of the numbers he discusses.
On the one hand, he estimates the probability of an event in question
(winning the Irish lottery) as one in ten to the eighteenth power. On the other
hand, he assumes that only one million people play that lottery.
One million is ten to the sixth power, which is just a tiny fraction of
ten to the eighteenth power. In
this way Behe artificially and drastically decreases the chance of winning for
anyone (not just a particular person). To clarify that observation, let us
discuss a small lottery where there are only 100 possible sets of numbers to be
chosen by players, and, accordingly, only 100 tickets are available for sale.
The probability of winning is the same 1/100 for each ticket. If all
tickets are sold and no two players have chosen the same set of numbers,
one of the tickets (we do not know in advance which one) will necessarily win.
Therefore, if all tickets are sold, the probability that some
set of chosen numbers is the winning
one is close to 100%. (If we account
for the possibility of more than one player choosing the same (losing) set of
numbers, the probability of someone [not a specific player] winning is less than
100%. The proper calculation shows, however, that this probability is at
least 37%. Such a calculation is given in the appendix to the article at Improbable probabilities.) Now
assume that out of 100 available tickets, only ten have been sold.
Each ticket has the same chance to win, regardless of its being sold or
not. Therefore, if no two players
choose the same set, the probability
that at least one of the sold tickets
will win is now 10% instead of 100% (which it would be were all tickets sold).
This example shows that the exceedingly small probability of anyone (not
just a particular person) winning in the imaginary lottery described by Behe is
due to his deliberate choice of numbers – only one million tickets sold while
the number of potentially available tickets is immensely larger.
Obviously, Behe's example has nothing to do with a real lottery. In
every real lottery the number of tickets sold is usually close to the total
number of available tickets, and the coincidental choice of the same numbers by
more than one player is very rare; therefore the probability that someone (not
just a particular person) wins is always close to 100%, and in any case not less
than 37%.
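The point is easy to check by simulation. The sketch below (a toy model of my own, not taken from Behe or from the appendix referred to above) lets every player pick a set of numbers independently and at random; with 100 players in a 100-ticket lottery, someone wins in roughly 63% of the draws, comfortably above the 37% lower bound quoted above, while with only 10 players the figure drops to about 10%.

```python
import random

def someone_wins(tickets_available: int, players: int, trials: int = 100_000) -> float:
    """Estimate the probability that at least one player holds the winning set,
    assuming every player picks a set independently and uniformly at random."""
    wins = 0
    for _ in range(trials):
        winning_set = random.randrange(tickets_available)
        if any(random.randrange(tickets_available) == winning_set for _ in range(players)):
            wins += 1
    return wins / trials

print(someone_wins(100, 100))   # about 0.63, close to 1 - 1/e
print(someone_wins(100, 10))    # about 0.10
```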
Second, Behe's discussion is irrelevant if the probability of a particular person winning is considered.
This probability does not depend on the number of tickets
sold. If the total number of
available tickets is 100, then each ticket, whether sold or not, has the same
probability of winning, namely 1/100. If,
as in Behe's example, the total number of possible events is ten to the
eighteenth power, then the probability of a particular event happening is one in
ten to the eighteenth power. It is
exceedingly small. However, it is equally
small for all possible events. If no
two players chose the same set, one set of six numbers must necessarily
win despite its probability being exceedingly small.
Therefore the exceedingly small probability calculated by Behe for the
case of TPA in no way proves his thesis and in no way rebuts Professor
Doolittle's discourse.
Third, Behe seems to assume that an event whose probability is 1/N, where
N is a very large number, would practically never happen.
This is absurd. If the
probability of an event is 1/N it usually means that there are N equally
possible events, of which some event must necessarily happen.
If event A, whose probability is very low (1/N), does not happen, it
simply means that some other event B, whose probability is equally low, has
happened instead. According to Behe, though, we have to conclude that, if the
probability of an event is 1/N, none of the N possible events would occur
(because they all have the same extremely low probability). The absurdity of
such a conclusion requires no proof.
The assertion that events having very low
probabilities do not occur was suggested by the prominent French mathematician
Emile Borel (see, for example, his book Probability and Life, 1962).
Borel suggested what he labeled The Single Law of Chance which stated
that "Phenomena with very small probabilities do not occur." He estimated
that the events that cannot reasonably be attributed to chance are those whose
probability does not exceed one divided by ten to the power of fifty. Since
Borel was a very influential mathematician who contributed many fruitful ideas
to the field of probabilities, the quoted law gained wide acceptance, with its
meaning often extended far beyond its legitimate implications. A little later we
will discuss some facts illustrating that Borel's law, if interpreted
literally, is absurd.
These paragraphs in Behe's book may seem
to be of minor importance, as they appear to be beyond the main theme of his
discourse. However, this item is actually closely connected to the core of
Behe's main idea, his "irreducible complexity."
The concept in question comprises two elements, one
complexity, and the other irreducibility, both being necessary parts of Behe's
idea. The question of exceedingly small probabilities calculated for the
emergence of biological structures by chance is just another facet of the
concept of complexity. Complexity
of a biological system is a necessary component of Behe's scheme, because, as
Behe's idea implies, a system of low complexity has a much better chance of
emerging spontaneously as a result of a chain of random events.
In order to build a bridge from irreducible complexity to intelligent
design, Behe must assume that the probability of his system being the result of
a random unguided process is exceedingly small.
Consider a simple example (which has
actually been given by Behe, and will be again discussed later). Imagine we see
on a table a piece of paper covered with letters. Viewing the written text we
see that it is gibberish. We don't know who wrote all those rows of letters which make no sense, or why.
Then we notice that somewhere within the gibberish text a whole word appears,
say "love." The question is,
has this word appeared in the gibberish text by chance, or did whoever wrote all those
letters insert that word deliberately, i.e. by design? There is a reasonable
probability that the word in question did not result from design but happened
within the gibberish text by chance. Now
let us imagine that within the meaningless text we see a whole meaningful sentence
containing some forty letters. We estimate the probability of its occurrence by
chance as very small. The reason for that estimate is the much larger complexity
of the sentence compared with a single four-letter word.
The high complexity, according to Behe, means a low probability. This
example shows why the small probabilities calculated by Behe are crucial to his
main idea of irreducible complexity and that is why I had to analyze the
deficiencies of Behe's treatment of probabilities.
3. Additional remarks on Behe's use of probabilities
This section, which is meant to complement the above
discussion of Behe's treatment of probabilities, is based on comments
made by Dr. B. McKay, whose contribution is gratefully appreciated.
Suppose that, according to Behe's assertion, there is
indeed only one sequence of proteins that can perform a specific function (for
example, to clot blood). Suppose further, again according to Behe's approach,
that there are no other, simpler, biological processes that could perform those
functions. Also suppose that it can
be somehow proved that higher organisms could not evolve without those
particular mechanisms (like blood clotting).
Following Behe's discourse further, suppose also that a
spontaneous emergence of the protein's sequence which is necessary to perform
the function in question, by randomly joining individual proteins, is extremely
unlikely (i.e. assuming that the probability of such an outcome of random events
is too low to expect that it could have occurred during the time of the
Earth's existence).
In other words, grant Behe all his assumptions.
The conclusion that seems to follow from all Behe's
assumptions is that the "protein machines" were not created by joining
proteins at random. This is the
first part of Behe's conclusion. However, even if we accepted that highly
disputable part, the next part of his discourse, which asserts that
those "machines" must therefore be the products of intelligent design, would pose very
serious problems.
One of the problems in question is that Behe has not
eliminated other actions of randomness
besides the simple random joining of proteins. There are many alternative
possibilities. A few examples
follow.
(1) There could exist stable protein sequences quite
similar to the clotting (or any other) sequence. Assume, for example, that there is such a stable sequence
which differs from the one necessary to perform the clotting, by, say, 5% only.
If the sequence in question is stable, it tends to hang around a long
time. Perhaps it is biologically
useful or perhaps it resulted, along a common chemical pathway, from something
biologically useful.
In such a case, we have to explain the emergence of a
sequence differing only by 5% from an existing one, by random combinations,
which is immensely easier. Those
"sequences correct by 95%" could, in their turn, have earlier evolved in
similar fashion from "sequences correct by 90%."
There is no need to assume that the whole edifice was created in a single
step from the primordial soup.
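The difference between assembling a long sequence in one lucky step and improving an almost-correct sequence step by step can be illustrated with a toy simulation in the spirit of Dawkins' well-known "weasel" example (a sketch only; the target string, the mutation rate and the "brood" size below are arbitrary choices of mine, not a model of any real biochemical pathway).

```python
import random

ALPHABET = "ACGT"
TARGET = "ACGTACGTACGTACGTACGT"   # an arbitrary 20-symbol "useful" sequence

def mutate(seq: str, rate: float = 0.05) -> str:
    """Copy a sequence, occasionally replacing a symbol at random."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else s for s in seq)

def score(seq: str) -> int:
    """Number of positions at which the sequence already matches the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

# Cumulative improvement: each generation keep the best of a small brood of copies.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    brood = [current] + [mutate(current) for _ in range(50)]
    current = max(brood, key=score)
    generations += 1

print(generations)        # typically on the order of a hundred generations or fewer
print(4 ** len(TARGET))   # single-step random assembly would face ~1.1e12 possibilities
```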
(2) Perhaps the clotting (or any other useful) sequence can
be decomposed into relatively few fragments ("bricks") which are also
parts of other biologically useful sequences.
Then the clotting sequence could have resulted from the
random combination of bricks broken off sequences that already existed.
Again, this is immensely easier probability-wise.
Suppose that we have 4 types of blocks.
Consider a particular sequence which is 100 blocks long. Estimate the
likelihood of its emergence by a random joining of blocks. There are
803469022129495137770981046171215126561215611592144769253376 different "100 blocks
long" sequences of the 4 types of blocks. Of
course, this number is extremely large, hence the probability of the spontaneous
emergence of the particular sequence is exceedingly small, so, having accepted
Behe's approach, we don't expect to see the right sequence to emerge any time
soon. Now suppose that each group
("brick") of 10 blocks is a stable configuration.
All of the 10 "bricks" we need can be "made" in parallel, each by
joining 10 blocks at random, which is immensely easier because there are only
524800 ways to join 10 blocks in a sequence.
Given the required 10 "bricks," there are approximately 1858 million
ways to join 10 of them together, which is also a reasonably small number for
nature. So, overall, the expected
time for the first appearance of the required 100-block sequence is reasonably
modest.
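These counts are easy to reproduce (a sketch; note that the figures quoted above appear to count a chain of blocks and its mirror image as one and the same arrangement, which is why 524,800 appears rather than 4 to the tenth power).

```python
def sequences_up_to_reversal(types: int, length: int) -> int:
    """Number of sequences of the given length over 'types' symbols,
    counting a sequence and its reversal as a single arrangement."""
    palindromes = types ** ((length + 1) // 2)
    return (types ** length + palindromes) // 2

print(sequences_up_to_reversal(4, 100))   # the 60-digit number quoted above (~8e59)
print(sequences_up_to_reversal(4, 10))    # 524800 ways to join 10 blocks

# Building the 100-block sequence from ten stable 10-block "bricks" replaces one search
# over ~8e59 possibilities with ten parallel searches over ~5e5 possibilities each,
# followed by a far smaller search over arrangements of the finished bricks.
```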
(3) There can be an adaptive search process.
Consider a field of size 100x100 meters, which contains a single pit
somewhere in it, while the overall surface of the field is slightly sloped
toward the pit. We define the
"pit" as an area of 1x1 meter. If
we want to find the pit by randomly probing points in the field, we will not be
surprised if it takes a long time. However, we can find it much faster by a
certain, still random process. Put
a drunken man at a random place in the field and allow him a "random walk."
At each time unit (say, every second) the man takes a step in a random
direction. However, downhill steps
are on average a little bit longer than uphill steps. Eventually, the man will
reach the pit. It might still take
a long time, but the expectation of the time will be much less than the
expectation of the time required using random probing of the field.
(In
computer science there are a number of optimization methods that rely on this
type of random process. There are
even some directly modeled on evolutionary processes and using the same
terminology. They are often quite
successful at optimizing functions over search spaces much too complicated for
traditional methods).
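A rough simulation of the two strategies is given below (a sketch of my own; the field size, the pit location, the step lengths and the size of the downhill bias are arbitrary, and the distance to the pit is used as a stand-in for elevation on the sloped field).

```python
import random

SIZE = 100.0           # the field is SIZE x SIZE meters
PIT = (20.0, 70.0)     # arbitrary location of the 1 x 1 meter pit

def in_pit(x, y):
    return abs(x - PIT[0]) <= 0.5 and abs(y - PIT[1]) <= 0.5

def random_probing():
    """Blind probing: pick independent random points until one lands in the pit."""
    probes = 0
    while True:
        probes += 1
        if in_pit(random.uniform(0, SIZE), random.uniform(0, SIZE)):
            return probes

def drunken_walk():
    """Random walk with a slight bias: 'downhill' steps (toward the pit) are a bit longer."""
    x, y = random.uniform(0, SIZE), random.uniform(0, SIZE)
    steps = 0
    while not in_pit(x, y):
        steps += 1
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        downhill = abs(x + dx - PIT[0]) + abs(y + dy - PIT[1]) < abs(x - PIT[0]) + abs(y - PIT[1])
        length = 1.2 if downhill else 0.8
        x = min(max(x + dx * length, 0.0), SIZE)
        y = min(max(y + dy * length, 0.0), SIZE)
    return steps

print(sum(random_probing() for _ in range(100)) / 100)   # about 10,000 probes on average
print(sum(drunken_walk() for _ in range(100)) / 100)     # typically far fewer steps
```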
(4)
It might be true that sequences which are very close to the clotting sequence
are of no use for modern organisms, but maybe they were useful to earlier
organisms. It could be that in
early times there was some primitive organism with some primitive (but useful)
protein sequence and that the organism and sequence evolved together into
gradually more complex forms. Changes
in either the organism or the sequence could help direct the evolution of the
other, so there is no real surprise if the sequence is useful to the organism at
each point of time.
Even if options (1)-(4) can be ruled out somehow, what about some
mechanisms (5), (6), etc., that we didn't think of yet? To
assume that everything in nature happens only according to known
mechanisms would unduly limit the path to the scientific elucidation of the
unknown.
In the following sections I shall
concentrate on the main thrust of Behe's book, namely on his attempts to prove
the so-called "intelligent design" based on his concept of "irreducible
complexity."
4. Anti-Darwin crusade
The very title of Behe's book, Darwin's Black Box, makes a
reader expect an effort to debunk Darwin's theory of evolution.
Of course, debunking Darwin is a common pastime of creationists.
One of the arguments favored by creationists is that the view of the
Darwinists is "just a theory." Of
course, this is true. Every
scientific theory is "just a theory." However,
the opponents of evolution favor some other concepts despite their being "just
theories" if they perceive these concepts as supporting their beliefs. For
example, many creationists happily embrace the big bang theory, which seems to
jibe well with the biblical story about creation.
Of course, the big bang theory is also essentially "just a
theory." It has no direct experimental confirmation and is based on a
sophisticated interpretation of observational data.
Nevertheless, the creationists tout this theory as a
"scientific proof" of their religious beliefs.
Darwin's theory of evolution is less fortunate in that it seems to
contradict the biblical story and, therefore,
its being "just a theory" is claimed by creationists to be its
critical flaw.
It is interesting to note that Behe never admits to being a
creationist. He suggests arguments in favor of "intelligent design" without
ever mentioning who the supposed designer could be. However, this fig leaf
covering something Behe seems reluctant to admit is quite transparent.
No wonder open creationists have accorded Behe's book effusive
accolades. There is no doubt
whatsoever that Behe's real thesis is that the creation story is fully
compatible with the biochemical evidence while the evolution theory is not.
On page 15 of his book Behe says that Darwin's theory explains
microevolution very well but fails
to explain macroevolution. In
this paper, I shall not dispute that statement by Behe, since this paper is not
about Darwin's theory. I shall
instead address the particular arguments suggested by Behe in favor of
"intelligent design" theory.
5. Intelligent design according to Behe
Of course, the concept of intelligent design being responsible for the
existing universe's structure in general, and for the existing forms of living
organisms in particular, was not invented by Behe.
In various forms, this concept has been discussed many times before Behe.
Behe's contribution to this discussion is in evoking the images of
immensely complex biochemical systems and claiming that the complexity of those
systems is "irreducible" and therefore points to "intelligent design."
There are many descriptions of those fascinating, exceedingly complex
biochemical systems in Behe's book. Among
those examples are the mechanism of blood clotting, the device used by bacteria
for moving (the "cilium" mechanism), the structure of the human eye, etc.
All these systems look like real miracles and it is fun to read Behe's
well written discussions of those immensely complex combinations of proteins,
each performing a specific function.
The complexity of the biochemical systems has been demonstrated by
Behe in a very convincing way.
To lead the way to his conclusion about intelligent design, Behe
claims that the complexity in question is "irreducible."
This term means that the removal of even a single protein from the
convoluted chain of protein interactions would render the entire chain
non-operational. For example,
removing even one protein from the process of blood clotting would make blood
either not clot, causing the organism to hemorrhage, or coagulate totally, also
leading to the organism's demise.
From that statement Behe proceeds to claim that the alleged
irreducible complexity could not be the result of an evolutionary process and
therefore can only be attributed to "intelligent design."
Let us discuss all three steps in Behe's reasoning, namely a)
complexity, b) irreducibility, and c) attribution to intelligent design.
6. Complexity as a façade for probability
Complexity is one of the two
components of Behe's irreducible
complexity concept. Regrettably, Behe himself does not offer any definition
of what he means by complexity. Therefore, to analyze the actual meaning of his
overall idea of irreducible complexity,
we have to find clues in his descriptions of those biochemical systems he views
as being complex.
As mentioned above, Behe's concept has been touted as a
revolutionary breakthrough in the development of a convincing denial of
Darwin's evolution theory. My intention in this paper is
not to disprove the idea of an intelligent design in general, but only to test the arguments offered by Behe in favor of that
concept. I am concerned in this paper only with the validity of Behe's
arguments which have been acclaimed by the adherents of creationism as a
crushing blow to the evolutionists' views.
When discussing the concept of complexity,
we can turn to the writings of some supporters of Behe who invested considerable
effort in solidifying Behe's assertions by plugging certain obvious holes in
them, including his failure to define complexity.
In particular, it is interesting to look at a definition of complexity
offered by Dembski. To explain why Dembski's explanation should be considered
as a legitimate elaboration on Behe's discourse, let us review certain
quotations. Behe wrote a foreword to Dembski's book Intelligent Design
mentioned before. Behe wrote:
"Although it is difficult to predict (frequently nonlinear) advance of
science, the arrow of progress indicates that the more we know, the deeper
design is seen to extend. I expect that in the decades ahead we will see the
contingent aspect of nature steadily shrink. And through all of this work we
will make our judgments about design and contingency on the theoretical
foundation of Bill Dembski's work." It is a very favorable
evaluation of Dembski's work by Behe. It provides us with the confidence
that Behe fully accepts Dembski's treatment of complexity. Indeed,
nowhere can we find a single instance of Behe's disagreement with any of
Dembski's arguments.
It seems relevant to point out that Dembski's work has been
acclaimed in similar superlative terms also by other adherents of intelligent design. For example, Rob Koons, an associate professor of philosophy
at the University of Texas, writes (quoted from the cover of Dembski's book
Intelligent Design): "William Dembski is the Isaac Newton of the
information theory, and since this is the Age of Information, that makes Dembski
one of the most important thinkers of our time."
Since in this paper I concentrate mainly on Behe's book, I will not
discuss here Dembski's supposed contribution to information theory (which in
my view entails serious faults). A detailed review of that matter is offered in
a separate paper (see A consistent inconsistency).
Also, a brief discussion of informational aspects of the "intelligent design
theory" can be seen in my review of P. Johnson's books and articles, at A militant dilettante in judgement of science.
What is relevant for the discussion in this paper is that Dembski enjoys a great
deal of authority amongst the proponents of intelligent design, and therefore,
when he writes about subjects related to Behe's work, his opinions can be
viewed as authoritative expressions of that camp's position.
In Dembski's book The Design Inference we find the following
definition of complexity (page 94): The
complexity of a problem Q with respect to resources R, denoted φ(Q|R) and
called "the complexity of Q given R," is the best available estimate of how
difficult it is to solve Q under the assumption that R obtains.
In the same book Dembski provides a similar definition of
"difficulty." According to Dembski, complexity is just
"disguised probability." In his theory, complexity is reflected
in the difficulty of solving a problem, and this, in turn, is tantamount to the
small probability of finding a solution to the problem by chance. I intend
to dispute that conclusion in some of the following sections. A detailed
discussion of Dembski's theories of probability and complexity, including the
above definition, is given at A consistent inconsistency, where serious deficiencies of the above definition are revealed. For our
discussion of Behe's book it is sufficient to point out that complexity
defined by Dembski is essentially a concept quite different from complexity in
Behe's interpretation.
However critical we may be of
Dembski's definition of complexity, we possess no other definition given
by "design theorists," so we have to turn to the definition given by the
"Newton of information theory" in our attempts to interpret the gist of complexity
in Behe's theory.
When applying the above definition of complexity to Behe's
biochemical systems we see that actually there are two different notions of
complexity. Complexity as defined by Dembski is practically synonymous with a
difficulty of solving a problem. On the other hand, in Behe's scheme, the
complexity is in the structure of the biochemical system. It is determined by
the number of components of the system and the number of links and
interconnections between those components. The more components the system
includes and the more interconnections between those components exist, the more
complex is the system. The two concepts of complexity are essentially different.
However, there is a link between the two definitions. It is probability. The
harder it is to solve a problem, the smaller, according to Dembski, the
probability that it will be solved by some unguided random actions.
The more complex a system is, insist Dembski and Behe,
the smaller the probability that it could have emerged as a result of
unguided random events. I submit
that more often than not the relation between the complexity of
a system (as implied by Behe) and
the probability of its spontaneous emergence is opposite to the relation assumed
by Dembski.
In the meantime, to illustrate Dembski's
logic, take a look at his example of a problem whose solution through an
unguided effort is highly improbable. It is an attempt to open a safe's lock
in a single try without knowing the correct combination. Since there are many
millions of possible combinations of dial's positions, to correctly guess the
right combination on the first attempt is highly improbable, and this
improbability, says Dembski, translates into the very high complexity of the
problem. (Actually this example was first discussed by Richard Dawkins in his
book The Blind Watchmaker, W.W. Norton & Co., 1986).
The biochemical systems described by Behe exemplify the other type of
complexity. For example, the blood clotting mechanism is based on the
interaction of many proteins, each performing a precisely defined function.
The system includes many components interconnected by multiple links.
It is very complex. The complexity implied by Behe is in the structure of
biochemical systems, i.e. in the large number of components and the large number
of interconnections among them. This complexity allegedly translates into the
very small probability that such a system could have emerged spontaneously via
random events. This is viewed by Behe and his supporters as an argument in favor
of those systems being the product of intelligent design.
To summarize the above discussion, the
only aspect of Dembski's complexity that has a bearing on Behe's line of
arguments is the suggestion that the complexity of a system translates into the
very small probability of its emergence via unguided random events.
All other aspects of complexity (of which there are many) are irrelevant
for Behe's discourse. Later I will return to discussing complexity in general
and of biochemical systems in particular, from a viewpoint ignored by Behe.
But now let us take a closer look at the very facet of complexity which
is at the core of Behe's use of it, namely its probability aspect.
First, recall our discussion of Behe's calculation of probabilities.
Can a very small probability serve as a decisive argument against the
possible occurrence of an event? As
Dembski himself admits, it cannot.
Events whose probability is exceedingly small occur every day.
Imagine tossing a die with the letters A, B, C, D, E, and F on its six
facets. Assume it has been tossed one hundred times.
After each trial we write the letter facing up on a piece of paper. The
combination of 100 letters obtained after 100 trials constitutes an event. There
are six to the power of 100 possible events, i.e. possible combinations of
100 letters drawn from the six letters listed above.
This is an enormously large number, about 6.5 times ten to the power of
77. Only one particular set of
letters out of that vast number of possible sets has occurred. Whatever
combination has actually resulted from the test, it has an exceedingly
small probability, close to one divided by more than ten to the power of 77.
The denominator of that fraction is by 43 orders of magnitude larger than the
number Behe calls (page 96) "horrendously large." This fraction is by 28
orders of magnitude smaller than the lowest limit of probability for a random
event as suggested by Borel (ten to the power of minus 50). Nevertheless, the
event in question, whose probability was so exceedingly small, actually
occurred. Nobody would be surprised by the occurrence of that exceedingly
improbable event, because some combination of 100 letters must unavoidably have
happened; all of the possible combinations have an equally
minuscule probability, hence when one of those exceedingly improbable events
occurred, there was no reason for surprise.
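The numbers in this example are easy to verify, and one such "exceedingly improbable" event can be produced on demand (a small sketch in Python):

```python
import random

outcomes = 6 ** 100
print(outcomes)        # about 6.5 * 10**77 possible 100-letter combinations
print(1 / outcomes)    # about 1.5e-78: the probability of any particular combination

# Produce one such exceedingly improbable event right now:
sequence = "".join(random.choice("ABCDEF") for _ in range(100))
print(sequence)        # this exact combination had a probability of ~1.5e-78, yet here it is
```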
Unfortunately, in many publications aimed at supporting the intelligent
design theory, including Behe's book, very small calculated
probabilities of events such as the spontaneous emergence of proteins are
offered as alleged proof that such events could not happen. An
often repeated statement in such publications is that events whose probability
is so exceedingly small just do not
happen. That statement is tantamount to the absurd assertion that nothing
happened, i.e. that no set of letters resulted from 100 tests. The indisputable
fact is that exceedingly improbable events happen all the time.
Behe's use of biological systems' complexity to posit the
improbability of their spontaneous emergence without intelligent effort is
hardly convincing.
While Behe shares this misconception about the impossibility of events
whose calculated probability is very small with many other proponents of
intelligent design, the mathematician Dembski, another prominent advocate of
intelligent design, is one of the few who realize the falsity of that
assertion. Indeed, on page 3 of Dembski's The Design Inference, we read:
"Sheer improbability by itself is not enough to eliminate chance." This is
true and runs contrary to Behe's interpretation of low probabilities.
On page 130 of Dembski's Intelligent Design we read a similar
assertion, again contrary to Behe's understanding of probabilities:
"complexity (or improbability) isn't enough to eliminate chance and
establish design." These
statements are especially telltale since they are written by a man highly
regarded by proponents of intelligent design, including Behe, and who himself is
one of the staunchest "design theorists."
Since, unlike Behe, Dembski understands that very small probabilities
do not prove design or disprove chance, he suggests a more elaborate criterion,
which, in his view, enables one to empirically discover design.
Dembski's idea is expressed in the concept of what he calls
the Explanatory Filter. This term
denotes a three-step scheme for choosing one of the three causes of events,
which, according to Dembski, are regularity, chance and design.
A detailed discussion of Dembski's theory is offered at A consistent inconsistency.
Let us briefly summarize Dembski's approach. Like many other proponents of
intelligent design theory, Dembski maintains that a very low probability of an
event (which he views as tantamount to its complexity) is a necessary condition
to infer design. However, unlike many other adherents of intelligent design
theory, Dembski realizes that an extremely small probability (high complexity)
of an event, even if necessary, is not
in itself a sufficient condition to
infer design. He perceives the additional condition that will make up for the
missing sufficiency in what he calls
either "specification" or "pattern."
Therefore, according to Dembski, if an event is a) highly improbable, and
b)"specified," this points to design. (Note, that if we consider the entire
three-step explanatory filter, it becomes clear the "specification" serves
as a sufficient condition only if the condition of low probability is also
present. If an event is specified but has a high probability, it does not lead
to a design inference. Only the combination of low probability and specification
provides a necessary and sufficient
condition for a design inference).
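In schematic form, the three-step filter just summarized can be rendered as a short decision procedure (a sketch of the logic only; the probability thresholds and the test for "specification" are placeholders of mine, not Dembski's actual criteria, which are examined at A consistent inconsistency).

```python
def explanatory_filter(probability: float, specified: bool,
                       high: float = 0.5, small: float = 1e-50) -> str:
    """Schematic rendering of the three-step filter; thresholds are placeholders."""
    if probability >= high:
        return "regularity"   # high probability: attribute the event to a law-like regularity
    if probability > small:
        return "chance"       # intermediate probability: attribute the event to chance
    # very small probability: design is inferred only if the event also fits a pattern
    return "design" if specified else "chance"

print(explanatory_filter(0.9, specified=True))     # regularity
print(explanatory_filter(1e-9, specified=True))    # chance: improbable, but not improbable enough
print(explanatory_filter(1e-60, specified=False))  # chance: no recognizable pattern
print(explanatory_filter(1e-60, specified=True))   # design
```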
I find some points in Dembski's argument
highly disputable, including his assertion that his filter produces no false
positives. In the review of Dembski's work at A consistent inconsistency
examples are shown disproving Dembski's assertions. To see how Behe
relates to Dembski's theory, let us review a quotation from Behe's foreword to
Dembski's book Intelligent Design. Behe writes: "For
example, if we turned a corner and saw a couple of Scrabble letters on a table
that spelled AN, we would not, just on that basis, be able to decide if they
were purposely arranged. Even though they spelled a word, the probability of
getting a short word by chance is not prohibitive. On the other hand, the
probability of seeing some particular long sequence of Scrabble letters, such as
NDEIRUABFDMOJHRINKE, is quite small
(around one in a billion billion billion).
Nonetheless, if we saw that sequence lined up on a table, we would think
little of it because it is not specified – it matches no recognizable pattern.
But if we saw a sequence of letters that read, say, METHINKSITISLIKEAWEASEL, we
would easily conclude that the letters were intentionally arranged that way. The
sequence of letters is not only highly improbable, but also matches an
intelligible English sentence. It is a product of intelligent design."
The above quotation is a concise representation of
Dembski's idea stripped of its mathematical and sophisticated-sounding embellishments.
Note that this quotation shows that Behe has abandoned the assertion made in
his own book that events of a very low probability just do not happen. Instead,
he now adopts Dembski's more sophisticated approach asserting that design must
be inferred only when there is a combination of a very small probability with a
recognizable pattern. As it is
argued in the review of Dembski's work referred to above, whereas Dembski
correctly denies the evidentiary power of low probability alone, adding
specification does not eliminate the probabilistic nature of design inference.
In Dembski's discourse, design inference is made if an event has a very
low probability and also displays a recognizable pattern.
To be recognizable, the pattern, according to Dembski, must meet two
additional conditions. One is the so-called "detachability," and the
other, "delimitation." In
its turn, detachability entails "conditional independence" of the
background information and "tractability." All these concepts are critically discussed in detail in the review of
Dembski's work at A consistent inconsistency.
In view of the above, how can Dembski's filter, so highly praised by Behe,
help the latter in proving the "irreducible complexity?" The answer does not seem to be very encouraging for Behe and
his supporters. There are no distinguishable "recognizable detachable
patterns" in the biochemical systems so beautifully described by Behe. Looking
at those immensely complex biochemical machines, we do not see detachable recognizable patterns as defined by Dembski, but rather
patterns that are, according to his definition, not detachable (as we have no independent background knowledge
enabling us to match the observed pattern to any sample known a priori).
Actually, given the purely subjective character of "detachability," the
question of Dembski's filter application to biochemical machines is moot.
After reviewing Dembski's concept of
complexity it seems that the only input that
concept provided to Behe's theory is that biochemical machines are highly improbable because
they are very complex. We will see later that even this statement is highly
questionable. However, even if it
were true, it would provide no new insight into Behe's concept of irreducible
complexity. Indeed, Dembski's definition of complexity
contains
no facet which could lead one to the elucidation of irreducibility
of complexity. To try revealing what can make complexity irreducible, we
have to look elsewhere. Therefore,
before discussing irreducibility, we have to talk about complexity, from a
viewpoint different from that chosen by Dembski.
7. Complexity from a layman's viewpoint
I will later return to complexity as a
mathematical concept, limiting myself in this section to some general
discussion of the intuitively understood meaning of complexity of a system.
In Behe's view, enormous complexity
(combined with its alleged irreducibility) is a sign that the system in question
must have been designed by some unnamed intelligent mind.
Is complexity indeed an attribute of an
intelligent design? Human
experience points in the opposite direction.
The simpler the solution to a problem, the more intelligence and
ingenuity it requires. The entire
history of technological progress proves that the best designs are always also
the simplest.
Let us look at a few examples.
Remember the electronic circuits which
appeared at the beginning of the 20th century.
They were based on vacuum tubes. The
simplest vacuum tube, a diode, had a number of fragile parts, soldered together
in an evacuated vessel. A triode,
which was a necessary part of any amplifier, had several electrodes of complex
shape soldered into a glass or metallic body with a bunch of contacts
penetrating the walls of the vessel.
Remember the first computer, created in
the forties under the guidance of J. Presper Eckert and John Mauchly. This
wonderful achievement of the human mind seems, from today's point of view, to be
a monster. It was a huge contraption containing 18,000 vacuum tubes and 3,000
switches.
If we accepted Behe's concept, improvements in electronics and computer
design should have proceeded via increased complexity of both vacuum tubes and
circuitry. Indeed, for a while, the
increased ability of electronic circuits to perform various tasks was achieved
by increasing the complexity of both the tubes and the circuits.
Vacuum tubes with four, then with five, six, seven, etc. electrodes, were
designed. The number of tubes in a circuit increased.
The more complex the tubes and the circuits became, the more slowly their
performance improved; progress hit a wall when the cost of the systems became
prohibitive without a substantial improvement in performance and was
accompanied by a drop in reliability. Then, in the late forties, the transistor
was invented. A transistor is an extremely simple device,
incomparably simpler than a vacuum tube. Its
introduction led to the enormous simplification of electronic circuits and thus
to the vastly increased ability to perform more complex tasks.
While modern computers are much more complex than that built by Eckert and
Mauchly, were they to perform only the same tasks as the computer of the
forties, they could be immensely simpler. This
simplification has enabled the enormous progress in computations, communications
and automation we enjoy today.
Remember another example, from a
completely different field. In the
19th century, various inventors tried to design a sewing machine.
A number of patents were granted. All
those machines were primitive, unreliable, and heavy, and inventors tried to
solve the problem by adding more parts, each designed to get rid of some of the
shortcomings of the machine but at the same time making it more complex.
In 1851, a man by the name of I. M. Singer
came up with a novel idea. His invention incorporated two elements. One was the
use of a simple shuttling hook and the other, of a simple needle of a special
shape. These two elements immediately made all the complex devices used by
Singer's predecessors unnecessary.
His machine was much simpler than any before it, while also much more
reliable and easier to use. It
became the model for further improvements by A. B. Wilson, who introduced a
swinging hook, thus further simplifying the design.
Would anybody say that Singer's and
Wilson's predecessors were more intelligent than these two inventors because
the predecessors' designs were more complex?
It has been agreed among experts in warfare that the Soviet-made T-34 tank,
designed by Mikhail Koshkin, was the best tank of WW2. It was also the simplest in
design.
The best submachine guns in their
categories are considered to be the Russian piece designed by Kalashnikov (AK47)
and the Israeli-made Uzi. Both are
also the simplest in design among all the submachine guns ever produced. The Uzi
has only seven parts, and is easily assembled and disassembled.
Many more such examples could be listed.
Now recall my statement that Dembski's
definition of complexity requires qualification.
I mentioned that Dembski's assertion, obviously supported
by Behe, that complexity is equivalent to low probability is highly
questionable. Let us now elaborate. Imagine that you have embarked on an
excursion from Rome into the Italian countryside and have lost your way. Of
course, everybody knows that all roads lead to Rome. Since you wish to be back
in Rome as soon as possible, you would like to choose the shortest road. There
are many different roads for you to choose from, but only one of them, let us
denote it S, is the shortest (i.e. the simplest) while there are many other
roads which all are more complex than S. However, you do not know which of the
roads is your most desired S. Imagine that you decide to rely on chance, say,
you assign a number to every possible road, write those numbers on pieces of
paper and then randomly pull one of the numbers out of your hat. Of course, the
probability that the randomly chosen road turns out to be S is much smaller than
the probability that the randomly chosen road turns out to be one of the more
convoluted ones, simply because there are so many convoluted roads but only one
shortest road. Imagine now that you do not rely on chance but rather decide to
approach the problem in an intelligent way.
For example, you buy a map in the nearest village and determine the
shortest road to Rome. In this case you have a very good chance of selecting the
shortest of available roads.
Hence, if you learned that your friend who lost his way in the
countryside chose the shortest, i.e. the simplest, road to Rome, you would have
a good reason to assume that he made an intelligent decision, choosing the road
by design rather than by chance. If,
though, you learned that your friend chose some convoluted, complex way, it
would rather indicate that he relied on chance.
Similarly, any task in either a mechanical or a biological system can
be performed in many ways. There are always many more complicated, convoluted
ways of performing a task than simple ways to do the same job.
If a machine, be it mechanical or biochemical, is very complex, it points
to its unintelligent origin. Nothing prevents a system of any degree of
complexity from emerging as a result of unguided random events. If, though, a
task is performed in a very simple way, there is a good chance a design is to
blame. The simple reason for that is that there are many convoluted ways to do a
job but only a few simple ways.
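The point can be made concrete with a toy road network (entirely hypothetical; the places, roads and distances below are made up). Only a tiny fraction of all possible routes from the village to Rome is the shortest one, so a route picked at random is almost always one of the longer, more convoluted ones, whereas a traveler with a map picks the shortest route every time.

```python
import random

# A hypothetical road network: (place_a, place_b, kilometers).
roads = [
    ("village", "A", 5), ("village", "B", 7), ("village", "Rome", 30),
    ("A", "B", 3), ("A", "C", 10), ("B", "C", 4),
    ("A", "Rome", 24), ("B", "Rome", 20), ("C", "Rome", 12),
]
graph = {}
for a, b, d in roads:
    graph.setdefault(a, []).append((b, d))
    graph.setdefault(b, []).append((a, d))

def all_routes(node, goal, visited=(), travelled=0):
    """Enumerate the total length of every simple route from node to goal."""
    if node == goal:
        yield travelled
        return
    for nxt, d in graph[node]:
        if nxt not in visited:
            yield from all_routes(nxt, goal, visited + (node,), travelled + d)

lengths = list(all_routes("village", "Rome"))
shortest = min(lengths)
print(len(lengths), shortest)                               # many routes, one shortest length
print(sum(l == shortest for l in lengths) / len(lengths))   # the shortest is a small fraction
print(random.choice(lengths) == shortest)                   # a random pick is usually longer
```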
In view of the above, I submit that Dembski's assertion equating
complexity with small probability can reasonably be turned upside down.
The simpler the system that successfully performs a job, the smaller is
the probability that it is a result of spontaneous random events. The more
complex the system is, the less probable is its origin in an intelligent design.
Of course, if the latter statement is accepted, it completely undermines
the very core of Behe's concept.
Behe has convincingly shown that
biochemical systems are extremely complex.
That complexity, according to Behe, is one of two necessary facets
pointing to intelligent design (the other is irreducibility). Why this
complexity in itself should point to
intelligent design remains Behe's (and his supporters') secret.
Of course, Behe considers complexity together with irreducibility, and when
combined, they, in his view, provide a strong argument in favor of intelligent
design. In one of the next sections
we will discuss the role of the alleged irreducibility of the systems described
by Behe. We have not yet completed
our separate discussion of complexity, but now we necessarily have to discuss it
together with irreducibility.
8. What is the real meaning of irreducibility?
The discussion of Behe's concept of
irreducible complexity must be twofold. On the one hand, we will have to discuss
the following question: if the biochemical systems described by Behe are, as he
asserts, indeed irreducibly complex, does this indeed point to a deliberate
design? On the other hand, we will
have to address the question of whether or not these systems are not just
complex, but indeed irreducibly
complex.
Behe's book, while written for laymen,
also purports to base its conclusions on an approach which is more or less
scientific. He considers facts
established by biochemical science and proceeds to make certain conclusions
seemingly stemming from those facts. Therefore,
in our discussion we may legitimately require from Behe adherence to some
elementary rules of scientific discourse. Unfortunately, this aspect of Behe's
approach leaves much to be desired.
Behe defines irreducible complexity as the tight interdependence of a system's constituent parts, such that the removal of even one of those parts renders the system non-operational. However, he does not seem to offer criteria which would enable one to determine whether or not a given system indeed meets that definition. His definition is made in an abstract way, providing no clues as to how, specifically, one is to find out whether the removal of a part of the system will indeed make it dysfunctional. Moreover, he does not offer proof that the systems he reviewed are indeed irreducibly complex in his sense.
There exists a quite rigorous and quite
general definition of irreducible complexity, of which Behe was apparently
unaware, and which actually defines something quite different from what Behe
means by his term. This definition is given in the part of mathematical
statistics called algorithmic theory of probability (ATP).
That chapter of statistical science was developed in the 1960s.
Its main creators were the American mathematician R. J. Solomonoff of
Zator Co., the Russian mathematician A. N. Kolmogorov of the Russian Academy of
Sciences, and the American mathematician G. J. Chaitin of the IBM research
center. ATP is a sophisticated
mathematical theory which makes use of elements of mathematical statistics,
information theory, and computer science.
The definition of irreducible complexity
developed in ATP, while being rigorously mathematical, is quite universal and
applicable to any system, regardless of its particular nature.
It is based on the concept of randomness, for which ATP provides a strict
definition as well.
It seems easiest to explain the
irreducible complexity according to ATP by using a mathematical example and a
computer analogy, although this specific example and analogy in no way limit the
applicability of the concept in question to any system, including the
biochemical systems discussed by Behe. The following explanation will omit certain subtle points of the ATP theory, which are not necessary to understand its general idea.
Consider the following set of digits: 01
01 01 01 01….. and so on. It is obvious that this sequence is highly ordered.
It is constructed by the repetition of zeroes and ones in pairs. The size of
this sequence, depending on the number of repetitions, can be any number, for
example, one billion bits. How can
we program a computer to reproduce this sequence?
It is obvious that there is no need to tell the computer all the numbers
of which this sequence consists. It is sufficient to tell the computer the rule
which determines the sequence. The
program in question can be written in a very simple and short form, essentially
boiling down to the following instruction: "Print 01 n times," where n can be any
number. The length of the program in question is much shorter than the length of
the sequence itself. No matter how
we increase the size of the sequence, the size of the program will always remain
much shorter than the sequence itself.
Now imagine the following sequence: 0011011000101100111010011110100011.....etc.
Such a sequence can be obtained, for example, by flipping a coin many
times and writing 1 each time the result is heads, and 0 when it is tails.
Viewing this sequence, we cannot see any
specific order in it. This set of
numbers corresponds to our intuitive concept of a random sequence. (Actually, only an infinitely long sequence can be viewed as unequivocally random, but for our discussion this is not crucial.) How
can we program a computer to reproduce this sequence?
Since there is no evident rule determining which digit must follow any
digits already known, there is no way to produce this sequence by using any
program shorter than the sequence itself. To
program a random sequence, we need to feed into the computer the entire
sequence, which serves as its own program. Hence, the size of a program that
produces a random sequence necessarily
equals the size of the sequence itself.
In other words, that disordered sequence, unlike an ordered
sequence, cannot be reproduced by means of a shorter program (we can also
discuss the problem in more general terms of algorithms instead of programs).
Again, a random sequence cannot be encoded by a program of a reduced
size, hence a random sequence is irreducible. Any
ordered sequence, on the contrary, can be encoded (at least in principle) by a
program (or an algorithm) which is shorter than the sequence itself.
Therefore, an ordered sequence is reducible.
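This distinction can be illustrated with an ordinary compression program, whose output length serves as a computable upper bound on the size of an encoding program (only a rough stand-in for the minimal program of ATP, since the true minimal size is not computable; the sketch below is mine).

```python
import os, zlib

ordered = b"01" * 500_000            # a million-character string "0101...01"
disordered = os.urandom(1_000_000)   # a million bytes with no discernible order

print(len(ordered), len(zlib.compress(ordered, 9)))        # shrinks to a few kilobytes
print(len(disordered), len(zlib.compress(disordered, 9)))  # stays essentially the same size
```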
While the above discussion is a simplified
presentation of some seminal concepts
of ATP, it can, hopefully, help to comprehend the definition
of irreducible complexity given in ATP. We
will discuss this definition after a few preliminary remarks.
Every system, including the biochemical ones described by Behe, can be represented by a certain algorithm or, in computer parlance, by a program which encodes the system. The code essentially boils down to a sequence of symbols which can be expressed in binary digits. If a system is not random, and hence obeys a certain rule, the encoding program (or algorithm) can be compressed, i.e. made shorter (in number of bits) than the system itself, by using that rule.
The complexity of a system is defined in ATP as the minimum size of a program (or algorithm) capable of encoding the system. (Actually, the minimal size can be defined only up to a certain "fudge factor," but this is not a crucial point for our discussion.) The more complex a system is, the larger the size of the minimal program that can encode it. If the size of the minimal encoding program cannot be reduced below the size of the system itself, i.e. if the minimum size of the encoding program (or algorithm) approximately equals the size of the system itself, the complexity of such a system is defined as irreducible.
Note that the definition of complexity in ATP is very different from the definition given by Dembski. The latter defines complexity in terms of the difficulty of solving a problem and identifies complexity with low probability. Dembski's definition provides no clue as to what can make complexity irreducible. The definition of complexity in ATP is a definition of the complexity of a system per se rather than of the difficulty of its reproduction. We will see later that ATP complexity has a relationship to probability which is opposite to that of Dembski's complexity.
The basic definition relevant to our
situation is then as follows: a system is irreducibly complex if the minimum
size of a program that is capable of encoding the system approximately equals
the size of the system itself. On
the other hand, if a system is not random, there exists (at least in principle)
a rule prescribing the structure of that system.
Using that rule, an encoding program
can be designed (at least in principle) which is much shorter than the system
itself, as this system is represented by an ordered sequence of binary digits.
Hence, a very important consequence of the basic theorems of ATP is as follows: if a system is indeed irreducibly complex, it is necessarily random. (The proper term is quasi-random, because, strictly speaking, only an infinitely large system can be unequivocally determined to be random, but this subtle point is not crucial for this discussion.)
In other words, ATP has established that
irreducible complexity is just a synonym for (quasi) randomness.
Whatever examples of biochemical systems Behe comes up with, he cannot escape the indisputable mathematical fact: if a system is indeed irreducibly complex, it is necessarily random. Of course, any system that is the result of
intelligent design (or even of an "unintelligent" design) is, by
definition, not random. The
unavoidable conclusion: if a system is indeed irreducibly complex, it cannot be
the product of design!
We see that if Behe wished to stick to his term irreducible complexity, his entire explanation, which suggests that biochemical systems are irreducibly complex and therefore must be products of design, would make no sense. In terms of ATP, however, biological systems are never irreducibly complex. Indeed, a tiny seed contains all the information necessary to grow an oak; the complexity of an oak is reducible to the much smaller program encoded in the seed.
While Behe's term is a misnomer and no biological system is irreducibly complex in terms of ATP, we can simply say that Behe has not chosen his term well. Does his concept, mislabeled irreducible complexity, nevertheless have some meaning different from the ATP term?
Reviewing Behe's multiple examples of biochemical machines, we can see that what he actually means by his term is the interdependence of all the components of a biochemical machine, such that the removal of any element renders the machine dysfunctional. We will have to discuss whether biochemical systems are indeed characterized by such a tight interdependence of all of their constituents, as Behe suggests, and whether, if they are, this points to intelligent design.
9. Maximal simplicity plus functionality vs. irreducible complexity
Let us approach the problem of the connection between complexity and design by using an analogy to the famous watchmaker argument. In that argument, one is asked to answer the following question: if you found a watch, would you believe that it was the result of a spontaneous natural process or that it was designed by a watchmaker? Of course, the answer is unequivocal, and everyone agrees that a contraption which performs a well-defined function must be the product of intelligent design. Let us analyze what feature of that watch led to the conclusion that it was a product of design. Was it the watch's complexity?
To answer the last question, let us formulate the problem a little differently. Suppose you sit on a beach and pick up various pebbles. Most of them have an irregular shape and a rough surface, with color varying from spot to spot and density varying over the volume. Suppose that you come across one particular piece which, unlike all the other pebbles, is of a perfectly spherical shape, its color and density perfectly uniform over its volume and its surface polished mirror-like. Obviously, the rational conclusion is that the perfectly spherical piece is an artifact, the result of an intelligent effort involving design, planning, and a set of actions aimed at producing that perfectly uniform ideal sphere. While we don't know the purpose of the designer of that spherical artifact, we have to admit that its spontaneous appearance is unlikely. Any other pebble, with its irregular shape, is more likely the result of some spontaneous natural process.
Now, the spherical piece is extremely simple and can be described by a very simple formula requiring only two numbers, the diameter and the constant density, plus the name of its color. The full description of that spherical artifact requires a simple program of small size. Any other pebble, with its complex shape and non-uniform distribution of density, color, and surface roughness, cannot be described by such a simple program; it requires a much more complex one containing many numbers.
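The contrast in description length can be sketched as follows (a toy illustration of my own; the particular numbers are invented placeholders):

    # The ideal sphere: a handful of values pins it down completely.
    sphere = {"diameter_mm": 30.0, "density_g_cm3": 2.65, "color": "grey"}

    # An irregular pebble: its shape alone needs a long list of surface points,
    # and its density, color, and roughness vary from point to point as well.
    pebble_surface = [
        (12.1, 3.4, 7.8),
        (11.9, 3.7, 8.1),
        (12.4, 3.1, 7.5),
        # ...thousands more points would be needed to specify the shape
    ]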
This example again illustrates that
complexity in itself is more likely to point to a spontaneous process of random
events while simplicity (low complexity) more likely points to intelligent
design. This is in full agreement
with the definition of complexity given in ATP but contrary to the definition of
complexity given by Dembski. Regarding
the ideal sphere, its complexity in terms of ATP, that is its complexity as a
system, is very small. However, the
probability of its spontaneous emergence is also very small, which is opposite
to the relation between Dembski's complexity and probability.
In Dembski's terms, the simpler a system is, the larger
its probability. A system
which is simple in ATP's sense, but fully functional, is complex in
Dembski's terms, since its probability is small.
A system that is simple in ATP's sense and also fully functional more
likely points to design than to chance. This
conclusion is contrary to Behe's concept which attributes large complexity to
design.
In fact, our conclusion about the likely origin of a watch was based not on its
complexity, but rather on its functionality.
The watch performs a definite task, and that gives rise to our conclusion.
Nothing prohibits a very complex system from emerging as a result of
random events. Functionality is
what seems to point to intelligent design. In the case of an ideal sphere,
we inferred design not because of the sphere's complexity, but because of its
obvious artifactuality (the term introduced by D. Ratzsch, for example in
his book Nature, Design and Science, State University of New York
Press, 2001).
If we analyze the examples given by Behe, we have to conclude that his thesis
was not about irreducible complexity
but rather about the functionality of
biochemical systems, or, more specifically, about a strict interdependence of
the system's components, each of them being necessary for the system to
properly function. In
Behe's often discussed example of a mousetrap, the feature relevant to the
discussion was not the trap's complexity or irreducibility.
The indication of design was
the trap's functionality, its ability to perform a certain task by means of a simple
combination of parts.
It is easy to provide examples of systems which are much simpler but nevertheless meet Behe's actual, rather than proclaimed, formula. One such example was given in a paper printed in the collection Mere Creation by the already mentioned mathematician Berlinski, who strongly supports the concept of intelligent design but is obviously aware of weaknesses in Behe's position. Imagine a regular chair with four legs. Of course, it is a very simple system. Cutting off a leg renders the chair unusable. Hence, according to Behe's actual formulation, which he himself strangely seemed not to recognize, that chair meets the requirement of what Behe unduly labeled "irreducible complexity."
Now, since complexity in itself more likely points to a spontaneous chain of
random steps rather than to intelligent design, what are the features of a
system which would point to intelligent design?
It is a combination of simplicity with functionality.
Hence, Behe's definition, first, should have been turned upside down
(simplicity instead of complexity) and, second, complemented by one more
necessary component - functionality.
If we discover that a system performs a certain task, this can (but does not necessarily) point to design. The simpler the system in question, and the better it performs its task, the more likely is its origin in intelligent design. The more complex the system performing a certain task, the less likely is the suggestion of intelligent design. Hence, if Behe and his followers want to test whether a system was likely created via intelligent design, they must have at their disposal criteria which would enable them to determine whether the complexity of the system performing a certain task is close to the minimum possible while preserving its functionality. Behe neither suggested such criteria nor applied them to the biochemical systems he described. Therefore his assertion that those systems are "irreducibly complex" (which actually should be redefined as "most simple but functional") was not substantiated. The sheer complexity of the biochemical systems is rather an argument against intelligent design, especially since it has not been shown that their complexity is close to the minimum possible while preserving functionality.
As an example of an irreducibly complex system, Behe refers to a simple five-part mousetrap. A mousetrap can be constructed in various ways. What is missing in Behe's discourse is proof that the particular design of the trap he described is indeed close to being as simple as possible. Moreover, it is easy to demonstrate that the five-part mousetrap described by Behe can easily be reduced to a four-part, three-part, two-part, and finally a one-part contraption still preserving the ability to catch mice. (A nice example of four-, three-, two-, and one-part contraptions, each capable of catching mice, can be seen at John McDonald's website at A reducibly complex mousetrap.) Alternatively, a mousetrap could be built in steps, starting from a one-part version and adding more parts, improving the mousetrap's performance at each step. Behe should have thought more carefully about the example allegedly illustrating his thesis.
10. Two facets of intelligent design
Note that the concept of intelligent
design is twofold. It comprises,
first, the idea of design, and,
second, the idea of that design being intelligent.
Just for the sake of illustration, imagine a situation which is
intentionally simplified to the extreme, and not intended to be viewed as
realistic. Hopefully readers will forgive the obvious frivolity of that example.
Assume that on a certain planet X civilization developed without chairs, so the inhabitants of that planet, if they wished to sit down, had to do so on the ground. Imagine further that the idea of a chair took hold and a competition was announced for inventing a comfortable chair. Now
assume that among the submitted proposals were chairs with various numbers of
legs. Of course, all of those chairs, once made, would be the results of design.
However, not all of them would qualify to be viewed as designed
intelligently.
For example, chairs with only one or two legs attached at the corners of the seat would be impractical, and therefore their design would rather be viewed as imbecilic. A chair with three legs would be the most intelligently designed, since it would combine a reasonable level of comfort with the best stability, being least dependent on the flatness of the floor. The design of a three-legged chair would deserve the label of intelligent design. A four-legged chair would have the drawback of being less stable on an imperfectly flat floor. However, a four-legged chair might win out as a little more comfortable. Hence, both the three-legged and four-legged designs could reasonably be viewed as intelligent. (Since this example is somewhat on the facetious side, we ignore the multiple possible variations of seats, such as chairs without legs, chairs hung from ceilings, sofas, etc.) Assume now that among the submitted designs were five-, six-, and seven-legged chairs.
Now imagine that the inhabitants of X were not among the smartest people in the universe, so they opted for seven-legged chairs (maybe in their religion the number seven was also supposed to have a special meaning). Assume that a visitor from another planet, Y, where chairs were still not in use, came to X and saw the seven-legged chairs. The visitor, who had never seen any other type of chair, might admire the seven-legged chairs as an amazing invention. Since he had never seen four- or three-legged chairs, he might believe that all seven legs were necessary. If that visitor happened to be a disciple of Behe, and had never had a chance to try a four- or a three-legged chair, he could conclude that he saw in the seven-legged chair that famous irreducible complexity. This, in turn, might lead him to the conclusion that the seven-legged chair was a result of intelligent design. He might never suspect that the design of that chair was in reality not very intelligent, but rather based on excessive complexity. (We will discuss excessive complexity further in a following section.)
Similarly, many biochemical systems described by Behe could very well
be excessively complex. If
they are excessively complex, it can be ascribed either to an unintelligent
design, or to a chain of random steps of evolution.
I don't think anybody would entertain as reasonable the idea of an unintelligent design on a cosmic scale. Hence, unless there is proof that the complexity of a system is not excessive, its complexity more likely points to randomness than to intelligent design.
At the beginning of this article we quoted
David Berlinski, a mathematician who highly commended Behe's book (in rather
general terms). It is of interest to look at another quote where Berlinski
relates to specific points in Behe's book, using a mathematical approach. On page 406 of the collection
Mere Creation Berlinski writes: "The definition of irreducible
complexity makes strong empirical claims. It is foolish to deny this as well to
suggest that these claims have been met. The argument having been forged in
analogy, it remains possible that the analogy may collapse at just the crucial
joint. The mammalian eye seems
irreducibly complex; so, too, Eucariotic replication and countlessly many
biochemical systems, but who knows?"
Indeed, who knows?
This statement, rather devastating to Behe's theory, is especially
impressive since it comes from a man who is regarded by the proponents of intelligent
design as one of their most accomplished mathematicians and a staunch
anti-Darwinist. Berlinski's statement reveals one of the weakest points of Behe's position – the
absence of proof that the biochemical systems he describes indeed
possess what he mislabeled irreducible
complexity, but which actually is a tight interdependence of
elements and an accompanying lack of compensatory mechanisms
(see next sections).
Behe's discourse provides no proof that irreducible complexity (in his sense of the term) is indeed present in cells; it gives even less indication that the systems in question are irreplaceable, that is, that they could not be replaced by other systems performing the same function, possibly in a more efficient way.
At least two features, if present in a
system, speak against the hypothesis of intelligent design. One is excessive
complexity, and the other, the absence
of self-compensatory mechanisms.
11. Excessive complexity
We have established so far that if irreducible complexity (either in the ATP or in Behe's sense) is indeed present, it rather points (for two different reasons) to the absence of design. However, this does not mean that if complexity is not irreducible, it must point to design. Complexity that is reducible (in Behe's sense) can justifiably be viewed as excessive, and as such it also points to the absence of design. Another term for it is redundant complexity (Niall Shanks and Karl H. Joplin, "Redundant Complexity: A Critical Analysis of Intelligent Design in Biochemistry," Philosophy of Science, 66 (2) (June 1999): 268).
The proposition of excessive complexity is not just a logical conclusion. There exists direct experimental evidence pointing to the excessive complexity of some biochemical systems. Moreover, one such piece of evidence relates precisely to the mechanism of blood clotting which Behe had chosen as an example of what he named irreducible complexity. In the 1990s, biochemists learned how to "knock out" individual genes from an animal's genome. In a paper by a group of researchers headed by Bugge (Cell, v. 87, pages 709-719, 1996), the results of an ingenious experiment were reported. These researchers succeeded in removing from a group of mice the gene instrumental in producing fibrinogen, a protein necessary for blood clotting. These mice lost the ability to clot blood and suffered from hemorrhage. In another group of mice, the researchers "knocked out" the gene responsible for the production of plasminogen, the protein ensuring a timely cessation of blood clotting and hence preventing thrombotic problems. As could be expected, the mice without plasminogen had serious thrombotic problems. However, when the two groups of mice were crossed, the resulting generation, which had neither fibrinogen nor plasminogen, turned out to be normal for all intents and purposes. This experiment showed that what Behe described as the irreducible complexity of the blood clotting system was actually excessive complexity, since the removal of two proteins (rather than of only one) from the system resulted in some alternative mechanism taking over.
As professor Russell Doolittle, a prominent biochemist and an expert on blood clotting, wrote referring to Bugge et al. (in Boston Review, v. 22, No. 1, p. 29, 1997): "Music and harmony can be achieved also with a smaller orchestra."
In a paper published in the collection Science and Evidence for Design in the Universe (Ignatius Press, 2000), Behe responded to Doolittle and argued that the experiment by Bugge et al. did not actually prove what I call excessive complexity. Since I am not a biologist, I will not delve into the arguments offered by Doolittle or Behe. We see here a dispute between two biochemists, so the problem for us laymen is whose view to trust. However, I could not fail to notice a peculiar feature of Behe's argumentation. In the mentioned paper he claimed that, after having considered Behe's rebuttal, Doolittle conceded being wrong in his interpretation of Bugge's results. In response to my inquiry, professor Doolittle categorically denied (in a private message) having ever given Behe any grounds for that claim. Contrary to Behe's claim, Doolittle strongly adheres to his original view. When I see a method of discussion in which an opponent is attributed something he never said, as seems to have been done by Behe in this case, I am inclined to doubt the rest of Behe's assertions as well.
Many more examples of excessive (or redundant) complexity of biological systems are known (see, for example, the paper by Niall Shanks and Karl Joplin referred to earlier in this article).
This discourse shows that the enormous complexity of events in a cell, so amply demonstrated by Behe, very often can be referred to as excessive complexity. Therefore the conclusion that the complexity in question can only be attributed to intelligent design is unfounded. On the contrary, the complexity in question is quite often an excessive complexity, which points to its being a result of random events rather than that of deliberate design.
12. Absence of self-compensatory mechanisms
Excessive complexity is not the only argument against intelligent
design. The strict interdependence
of all elements of a biochemical system,
even when it is not excessive, while not contradicting a possibility of design,
clearly speaks against it being intelligent.
Indeed, intelligently designed
machines are expected to have
built-in self-compensatory resources. If unforeseen circumstances render some
elements of the machine dysfunctional, the self-compensatory mechanism
automatically takes over the damaged function.
If having bought a car we discovered that its designer failed to provide
space and a holder for a spare tire, we hardly would praise the designer's
intelligence. Without a spare tire, each time a tire blew, it would render the
entire vehicle dysfunctional, precisely as the removal of or damage to a single
protein allegedly renders the entire biochemical machine dysfunctional according to
Behe's concept of "irreducible
complexity". The very essence of Behe's mislabeled irreducible
complexity implies the absence of self-compensatory mechanisms in
biochemical machines. If the
removal (or damage) of a single protein indeed makes the entire machine
dysfunctional, as Behe asserts, it is a very serious fault of the alleged
designer, whose intelligence immediately appears suspicious.
Since, again, the hypothesis of a stupid designer acting on a cosmic
scale is hardly satisfactory for any "design theorist," the constitution of
biochemical machines, if they indeed are as described by Behe, is quite a strong
argument against intelligent design.
(Biologists tell us, however, that
biochemical systems do actually possess redundancy enabling them to compensate
for the malfunction of certain parts of the protein machines. If that is
true, then the systems so picturesquely described by Behe are hardly irreducibly
complex.)
I can envision a counter-argument accusing me of not leaving room for intelligent design at all. Indeed, if, in my view, both the absence of self-compensatory mechanisms and irreducible complexity (in the ATP sense) point to a random chain of events rather than to intelligent design, isn't this self-contradictory?
My answer to such an argument is as follows:
First, my task is not to suggest a criterion of intelligent design but
rather to test the validity of Behe's arguments. In my view (see also A consistent inconsistency) the design inference is necessarily probabilistic.
If we see a poem or a novel, we have no difficulty in attributing it to
design because design is in this case overwhelmingly more likely than emergence
of a long meaningful text as a result of random unguided events.
Our inference in this case is based on our ken, as we have extensive
experience with texts written by
men and can easily recognize them. On the other hand, when we deal with a
biological system, our probabilistic estimate cannot be based on our ken because
we don't know in advance what the system in question must look like to be
attributed to design. If a biological system is indeed "irreducibly complex," whether in Behe's or in the ATP sense, then in both cases this is compatible with the assumption of its origin in a random process but (probabilistically) speaks against the design inference. If a biological system possesses redundancy which serves as a self-compensatory mechanism, this is equally compatible with both the design inference and the absence of design, but in that case Behe's assertion of "irreducible complexity" is contrary to the facts. If a biological system has no built-in self-compensatory mechanisms (as Behe suggests), this is an argument against intelligent design (although not necessarily against design as such).
Therefore Behe's entire concept hangs in
the air. It does not provide a
reasonable argument in the design vs.
chance controversy.
I don't know of arguments which would decisively disprove the suggestion of intelligent design. However, Behe's discourse, in my view, adds no valid argument in favor of intelligent design.
13. Conclusion
In this paper, I have omitted discussion of many secondary points and details of Behe's book, concentrating only on his main argument in favor of "intelligent design," the concept he designates by the misappropriated term "irreducible complexity" of biochemical machines.
Behe maintains that the alleged
irreducible complexity of the systems in biological cells could not emerge
spontaneously and therefore must be attributed to
intelligent design. Although
Behe avoids naming the alleged designer, there is little doubt who this designer
is supposed to be.
Without discussing the question of whether there was a divine designer or whether biological systems are the result of a very long process of random interactions between molecules combined with non-random natural selection, I have suggested in this paper that Behe's argumentation is flawed.
Let us briefly summarize the main points
of our discussion.
1) The very term "irreducible complexity" has been misappropriated by Behe: before Behe used it, it had already been rigorously defined mathematically and denoted a different concept. If any system reviewed by Behe happened indeed to be irreducibly complex, according to the proper definition of that concept, it would mean that the system is random and hence hardly the product of design.
2) An inseparable part of Behe's concept
is the complexity of biochemical systems. That
complexity itself, however, points not to an intelligent designer but rather to
a chain of unguided random events.
The probability of a spontaneous emergence of a complex
system performing a certain function is much larger than the probability of the
spontaneous emergence of a system which performs the same function in a simpler way. (The
simpler the system capable of performing a certain function, the more complex
the problem of creating such a system, and hence, the less probable its
spontaneous emergence).
3) Biological systems are never irreducibly complex in the mathematical sense of the term. Their programs are intrinsically reducible to shorter sets of instructions contained in embryos, seeds, the combinations of spermatozoa and eggs, etc.
4) Since there is no proof that any of the
systems described by Behe is indeed irreducibly complex (in Behe's terms),
many of them may be excessively complex, which is an argument against
intelligent design.
5) If any biochemical machine is indeed irreducibly complex (in Behe's terms), it means it lacks compensatory mechanisms and is highly vulnerable to any accidental damage to a single protein, which would render the entire system dysfunctional.
Such a structure of a biochemical machine, if it is indeed as described
by Behe, points to a lack of intelligence of the alleged designer, and, hence,
rather to the absence of a designer.
In view of the above, I submit that
Behe's book and his theory of irreducible complexity add no valid arguments to the
discussion of the "evolution vs intelligent design" controversy.
Behe's (and his supporters') rejection of Darwin's theory is
limited to pointing to aspects of that theory which have not yet been
sufficiently explained or understood. Every
scientific theory is incomplete and fails to explain some facts.
This does not negate the theory's positive features.
Newton's mechanics fails to explain, for example, the behavior of elementary particles. This does not mean Newton's theory must be rejected. Indeed, this theory is extremely useful, for example, in planning the flights of spacecraft, where its precision is amazingly good. Darwin's theory (or Neo-Darwinism in its various forms), like any scientific theory, may be correct in some respects and weak in others.
We can't demand of any theory answers to all questions about evolution, and even less about the origin of life. Moreover, the progress of science may indeed reveal that Darwin's theory contains more weaknesses than truths. This does not seem very likely, though, since evolution theory certainly contains many empirically verified elements, and to dismiss it, as some creationists actively try to do, would be a regrettable loss. Therefore, attempts to overthrow Darwin's theory, as Behe, Dembski, Johnson, and their cohorts do with gusto, on the basis of often dubious and sometimes even obviously incorrect notions are not a fruitful way to search for truth.
14. Appendix (posted on May 18, 2002)
In his reply to John H. McDonald, Behe insists that McDonald's counter-example with the reducible mousetrap is not good because, when the trap is improved by adding new parts, each time this is done by intelligent design. While the statement itself is correct, it is meaningless in the context of the dispute. Using this argument, Behe displays either deliberate or inadvertent inconsistency. Indeed, when, in his foreword to Dembski's book Intelligent Design, Behe discussed the difference between the phrase METHINKS IT IS LIKE A WEASEL and a string of gibberish of the same length, he maintained that the phrase from Hamlet was a product of intelligent design while the gibberish was due to chance. However, the string of gibberish he provided was obviously created by him deliberately to illustrate his point and was therefore a product of design no less than the phrase from Hamlet. It was a product of design in precisely the same sense as the less-than-five-part mousetraps were deliberately designed by McDonald. In both cases the examples were deliberately designed to illustrate certain notions. Behe, though, inconsistently attributes McDonald's reduced mousetraps to design but his string of gibberish to chance. To be consistent, he should have viewed both cases in the same way. If McDonald's partial mousetraps are viewed as designed, then the string of gibberish must also be viewed as designed. If, though, the string of gibberish is viewed as chance-generated, then McDonald's mousetraps must be viewed in the same way.
Indeed, although McDonald did design his mousetraps, he did so only to illustrate his thesis and did not imply that his model represented the biological reality. Neither did Behe's model of a five-part mousetrap. (It is sufficient to point out that mousetraps cannot replicate and so, unlike biological systems, cannot be a result of evolution.) The string of gibberish was designed by Behe to illustrate what a product of chance could look like. McDonald's series of mousetraps was designed in the same way, to illustrate how a system could have developed gradually from the simplest version through intermediate steps to the final five-part construction. What caused the development in question, conscious design or the blind forces of evolution, is irrelevant for the purpose of McDonald's example. The latter, while not pretending to reflect the entire complexity of the actual process of evolution, nevertheless illustrates very well that Behe's mousetrap is not irreducibly complex in Behe's sense. Contrary to Behe's assertion, out of the five parts of the mousetrap, four can be removed one by one and the remaining contraption can still be made suitable for catching mice, i.e., the device preserves its functionality although at a lower level of fitness. The trap can also be started as a one-part device and gradually improved by adding more parts, enhancing its ability to catch mice at every step. This is a good illustration of the evolutionary process, necessarily simplified in order to exemplify just one of the features of real biological evolution. While Behe's model is hopelessly inadequate, since it contradicts his main thesis of irreducible complexity, McDonald's model, although greatly simplifying the evolutionary process, is quite adequate to illustrate some basic points of that process by showing how a good mousetrap can be built step by step, improving its fitness at each consecutive step.
For this illustration, it is of no significance that in biological evolution the development of ever-improving organs is not due to design but is governed by the powerful forces of natural selection, while in McDonald's example he consciously changed the construction from step to step. What McDonald did consciously, with his series of rather large steps of improving the mousetrap, is done in biological organisms very slowly but inexorably by the unconscious and undirected forces of natural selection. No casuistry of the type used by Behe can rebut the adequacy of McDonald's example.
15. Comment on May 26, 2002
Recently, John McDonald updated his example of a gradually developing mousetrap. His updated version offers an impressive rebuttal of Behe's unsuccessful attempt to dismiss the relevance of McDonald's original series of mousetraps. In the new version, instead of only four rather large steps in the gradual development of Behe's five-part mousetrap, McDonald shows a series of much smaller steps, starting with a simple piece of bent wire which could serve as a primitive and not very reliable mousetrap but which, as McDonald states, is still better than no mousetrap at all. In the process of improving the mousetrap, McDonald shows how some parts gradually added to improve the trap's performance are at first optional, but, as other parts are added, the optional parts become necessary for the proper functioning of the device. This illustrates the evolutionary process in biology (although McDonald warns that his example is not a real picture of biological evolution but just an analogy for the sake of illustration). McDonald's new example is animated. It decisively lays to rest Behe's claim about the irreducible complexity of his mousetrap as a model of the irreducible complexity of a cell. Of course, we can expect attempts on the part of Behe and his defender Dembski (see the paper A Free Lunch in a Mousetrap on this site) to stubbornly reject McDonald's spectacular example. There is little doubt, though, that such attempts will be as meaningless as the incoherent quasi-arguments that Behe and Dembski have offered so far in an effort to salvage Behe's model.
Slightly different versions of this article have been printed in
Russian in the journal Kontinent, No 107, 2001, published in Moscow, Russia, and
in the journal Vremya Iskat, No 5, 2001, published in Russian in Jerusalem,
Israel.
Mark Perakh's main page