Wrongly Inferred Design
By Richard Wein
Posted April 25, 2004
When claiming to have scientific evidence of Intelligent Design (ID), one
of the arguments most often cited by ID proponents is that of Dr William
Dembski, based on a method that he calls the Design Inference. Not only is this
method in itself claimed to provide support for the ID position, but it is also
said to provide the probability-theoretic underpinnings for the Irreducible
Complexity argument of Dr Michael Behe. Thus, the Design Inference plays a very
important role in the ID movement's claims to scientific respectability. If the
Design Inference is fatally flawed--as this article will show it is--then the
position of the ID movement is seriously weakened.
For Dembski's most detailed description of the Design Inference, the reader
should refer to his monograph "The Design Inference: Eliminating Chance Through
Small Probabilities" [1], hereafter referred to as TDI. This is an expensive and
somewhat technical volume, for both of which reasons it may be inaccessible to
the lay reader. However, a briefer description of the Design Inference can be
found online in Dembski's article "The Explanatory Filter" [2].
There have been a number of previous critiques of the Design Inference, but I
feel these have overlooked some significant points. Furthermore, there has been
a great deal of confusion about the precise nature of the Design Inference,
which stems primarily from the high level of equivocation and obfuscation in
Dembski's work. This article will attempt to clarify the nature of the Design
Inference and demonstrate the following fundamental flaws:
1. Dembski has failed to solve the problem of specification.
2. Dembski has failed to make clear the precise nature of the Design Inference. Once clarified, it will be seen to be either trivially simple or invalid.
3. Dembski has never provided any data to support his claim that the Design Inference has successfully detected design in biological structures.
1. The Problem of Specification
A major part of the Design Inference is
"...eliminating chance through small probabilities" (to quote the subtitle of
TDI). The tool by which Dembski tries to achieve this aim is his Law of Small
Probabilities, which states that "...specified events of small probability do
not occur by chance" (TDI, p 5). The issue of specification is crucial to this
method. A specification made before observing the outcome of an event is
equivalent to the "rejection region" of conventional statistical tests. Where
Dembski departs from conventional statistics is in claiming to have a method of
formulating a valid specification after observing the event. If this claim were
well-founded, then Dembski would have made a revolutionary contribution to
statistical theory.
The problem with formulating a specification after observing the outcome of
an event is that one can almost invariably find a small-probability rejection
region which includes the observed outcome, by deliberately matching the
rejection region to the outcome, and thereby reject the chance hypothesis in
question. Such a contrived rejection region is called a "fabrication" by
Dembski. To quote Dembski: "Specifications are the non-ad hoc patterns that can
legitimately be used to eliminate chance and warrant a design inference.
Fabrications are the ad hoc patterns that cannot legitimately be used to
eliminate chance" (TDI, p 13).
Dembski claims to have a reliable method for distinguishing between
specifications and fabrications, which he describes at some length in TDI.
However, he gives us no theoretical justification for this method. Indeed, in a
reply to criticism from Eells, he claims that no such justification is needed,
because he is merely "...offering a rational reconstruction of a common human
activity" [3]. This seems to be a reference to a rather inscrutable passage at
the top of page 147 of TDI. But why should we accept that this is a
reconstruction of how we actually think when we intuitively infer design? Surely
a reconstruction must be based on empirical evidence, and Dembski offers us
none. With neither a theoretical justification nor any empirical evidence, all
we have is Dembski's intuition, and he fails even to support his intuition with
any fully worked-out examples of successful applications of the Design
Inference. (In the Caputo case, his most detailed example, he doesn't establish
a probability bound, so we don't know if the calculated probability is small
enough to infer design.) To show that Dembski's intuition is wrong, I will
present two counterexamples, in which patterns satisfying his criteria for being
specifications actually turn out to be fabrications.
My first counterexample is based on one of Dembski's own examples, the Caputo
case. Dembski wishes to test the chance hypothesis that the ballots were
selected at random (with equal probabilities of Democrat or Republican being
selected), and proposes the following specification: "The Democrats got the top
ballot line at least 40 times." In TDI (pp 166-167) he claims that this pattern
fulfills all his requirements for being a specification. However my contention
is that this pattern is in reality a fabrication, because it has been selected
to fit the observed event. To see this more clearly, let us suppose--contrary to
fact--that the 41 ballots in question had been headed by Democrats and
Republicans in strict alternation, or that all of them had been headed by a
Republican candidate. In place of the historical sequence
DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD we might have seen
RDRDRDRDRDRDRDRDRDRDRDRDRDRDRDRDRDRDRDRDR or
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR.
While these hypothetical outcomes would not have favoured Caputo's party, as
the historical one did, they should still have led an astute observer to suspect
that the draws were not random (i.e. were not in accordance with the chance
hypothesis). We can speculate on reasons why Caputo might have engineered such
sequences: in the first case, perhaps he deliberately alternated the ballots out
of a misguided desire to be seen as unbiased; in the second case, perhaps he was
secretly in the pay of the Republican party. But such speculations are
unnecessary. The fact is that, if one of these outcomes had been observed, we
would still have had grounds to conduct a test of the same chance hypothesis,
but we would have selected a different specification. Hence, Dembski's
specification is actually an ad hoc fabrication selected to fit the observed
outcome. A genuine specification should have included all outcomes that were as
noteworthy (in some sense) as the one actually observed. By excluding some such
outcomes, Dembski has made the specification too narrow, and thus calculated too
small a probability.
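The numbers make the point vivid. Here is the calculation for Dembski's narrow specification alongside one illustrative wider region (which outcomes count as "equally noteworthy" is exactly the unsolved problem, so the wider region below is only an example):

    from math import comb

    N = 41
    total = 2 ** N

    # Dembski's specification: Democrats head the ballot at least 40 times.
    p_narrow = (comb(N, 40) + comb(N, 41)) / total
    print(f"P(>= 40 D) = {p_narrow:.2e}")  # ~1.91e-11

    # A wider specification that also admits outcomes an astute observer
    # would find suspicious: at least 40 Republican headings, plus the
    # two strictly alternating sequences.
    p_wide = (2 * (comb(N, 40) + comb(N, 41)) + 2) / total
    print(f"P(wider)    = {p_wide:.2e}")   # roughly double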
For my second counterexample, let's say that I'm sent a computer text file
which contains just one English word, say "design". I wish to test the chance
hypothesis that the characters in the text file were generated at random. To be
precise, I'll take as my chance hypothesis: "randomly selected characters were
added to the file one at a time until an end-of-line character occurred." I
propose the specification: "any English word." This pattern meets Dembski's
three criteria for a valid specification, namely:
- CINDE. As my "side information", I take the contents of an English
dictionary. (For simplicity, let's not worry about unlisted variations in word
endings, such as plurals.) My selection of a dictionary has no effect on the
probability of receiving the word "design", given the chance hypothesis, so the
conditional independence criterion is met. Note that Dembski's formulation of
this criterion is confusing. It is stated formally as "P(E|H&J) = P(E|H) for
any information J generated by I" (TDI, p 152). But, if the chance hypothesis
(H) uniquely defines a probability distribution, as it does in all Dembski's
examples and must do if we are to calculate a probability, then additional side
information (I) cannot possibly alter the probability of an outcome. From
Demsbki's examples, however, particularly the Caputo case (TDI, pp 166-167), it
appears that the CINDE criterion could be more clearly stated as the requirement
that the side information include only information known independently of the
outcome of the event. Using Dembski's words, I can say that my side information
meets the requirement, because the contents of my English dictionary "can be
known and exhibited irrespective of" my receipt of any text file (TDI, p 167).
- TRACT. In formal terms, Dembski states this criterion as "phi(D|I) <
lambda" (TDI, p 152). But, since he gives us no procedure for selecting a
complexity measure phi or a limit lambda, it's unclear how we should apply the
criterion. In less formal terms, Dembski states the criterion as "...a matter of
confirming that the problem of formulating D [the specification] on the basis of
I [the side information] is tractable for S [a subject]" (TDI, p 146). It's not
entirely clear what this means, but at least one subject (me) had no difficulty
formulating the specification, so I assume that the criterion is met. (This
criterion seems to be redundant. Dembski's method already requires us to
formulate a specification. If the problem of formulating a specification is
intractable, then we obviously cannot apply the method. Why does this need to be
stated as an additional criterion? To quote from Fitelson, Stephens and Sober
[4]: "Because tractability depends on your choice of language and computational
procedures, we think that TRACT has no evidential significance at all.")
- DELIM. The word "design" will be in our English dictionary, so the
delimiter condition is satisfied.
So it seems that my proposed specification is, according to Dembski's
criteria, a valid one. However, in reality it is nothing more than a
fabrication, because I chose my side information (and hence my specification) in
an ad hoc fashion to fit the pattern that I noticed in the text file, namely the
pattern of an English word. Had I received the sequence "dessein", I could have
selected the specification "any French word." Had I received the sequence
"jogfsfodf", I could have selected the specification "any English word with the
letters shifted to the next in the alphabet" (shifting the letters back again
gives the word "inference"). And so on. Each of these possible sequences, and
many others besides, could be considered just as noteworthy as "design", so our
specification should be wide enough to include all of them, and we should
calculate the probability of any of these sequences occurring. Dembski's method
causes us to calculate far too low a probability in this case, which means we're
liable to reject the chance hypothesis too easily.
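Rough orders of magnitude bear this out. The sketch below assumes a uniform 27-symbol alphabet (26 letters plus an end-of-line character); the alphabet and the dictionary counts are my own illustrative assumptions, not figures from Dembski:

    ALPHABET = 27

    def p_word(word):
        # P(exactly this word, then end-of-line) under the chance hypothesis
        return (1 / ALPHABET) ** (len(word) + 1)

    print(f"P('design')            = {p_word('design'):.1e}")  # ~9.6e-11

    # The specification "any English word" already sums over tens of
    # thousands of strings; admitting equally noteworthy patterns (French
    # words, Caesar-shifted English words, ...) widens it much further.
    n_six_letter_words = 10_000   # assumed count of six-letter entries
    print(f"P(some 6-letter word) >= {n_six_letter_words * p_word('design'):.1e}")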
The fundamental flaw in Dembski's specification method is this. While the
CINDE criterion prevents us from selecting side information containing
information about the outcome of the event, it doesn't prevent us from making
our selection based on the outcome of the event. Since we choose our side
information in the light of the outcome, it is not independent of the outcome,
and therefore neither is the specification.
Before leaving the subject of specification, I'd like to consider how the
concept might be applied to the phenomenon in which ID proponents are really
interested--the origin of biological structures. Although Dembski has failed to
give an adequate account of specification, I nevertheless claim that "the origin
of intelligent life" is a specified event, because, without it, there would be
no-one here to formulate the specification. Thus, this cannot be an ad hoc
fabrication just concocted to fit the observed event. It is an essential
prerequisite of the observed event. It also follows that any event which is an
essential prerequisite of intelligent life (such as "the origin of life") is
also a specified event. The problem for the ID proponent, however, is that such
specifications are probably too vague to be of any practical use, as we cannot
imagine all the different forms that life might take.
Consider for example the common Creationist argument that this or that
biological molecule could not have occurred by chance. Let's take hemoglobin as
an example. While the typical creationist would probably attempt to calculate
the probability of the origin of hemoglobin in the specific form that we see it
today, a proponent of Dembski's methods might recognise the need for a wider
specification, and might choose as his specification "any molecule which would
perform the same function as hemoglobin". Perhaps it might even be possible to
place reasonable bounds on the range of possible forms that such a molecule
might take, enabling a probability to be calculated. However, this would still
not be an adequate specification unless we could establish that this function is
essential for intelligent life. Otherwise, we would simply be fitting the
specification in an ad hoc way to the particular form of life that we observe.
Associated with the problem of specification is the problem of establishing a
probability bound, i.e. how "small" does a probability have to be before we can
reject the chance hypothesis? I claim that Dembski's approach to probability
bounds is also fundamentally flawed. I don't propose to discuss this issue in
detail here, since, without an adequate specification, the problem of
establishing a probability bound does not even arise. Suffice to say that
Dembski's failure to establish a probability bound in the Caputo case is, in my
opinion, indicative of the problems with his method. In the case of the origin
of life, however, since this is an essential prerequisite to our being here as
observers, I consider Dembski's universal probability bound (1/2 x 1/10^150) to
be reasonable, indeed conservative, providing we assume that there are no other
universes besides the one in which we exist (or only a small number of such
universes). Dembski considers the possible relevance of other universes, but
dismisses it as "the inflationary fallacy" (TDI, pp 214-217). While he's right
to dismiss the relevance of other universes in the general case, such as Mother
Teresa turning out to have been an ax murderer (Dembski's example, not mine!),
he cannot do so for the case of the origin of life, because of the well-known
"selection effect". Dembski himself describes and accepts the relevance of the
selection effect with regard to the number of planets on which life could
potentially have originated (TDI, pp 182-183), and exactly the same argument
applies with regard to the number of universes in which life could potentially
have originated. I'm not saying that other universes actually exist. That is a
moot point. But, if they do exist, they are certainly relevant to the
probability of the origin of life.
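For reference, the arithmetic behind the universal probability bound cited above is simple, using the figures Dembski employs in TDI (10^80 elementary particles in the observable universe, 10^45 state changes per second, roughly the inverse Planck time, and 10^25 seconds of available time):

    particles = 10 ** 80
    changes_per_second = 10 ** 45
    seconds = 10 ** 25

    probabilistic_resources = particles * changes_per_second * seconds  # 10^150
    universal_bound = 1 / (2 * probabilistic_resources)                 # 1/2 x 10^-150
    print(f"universal bound = {universal_bound:.1e}")                   # 5.0e-151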
2. What is the Design Inference?
As far as I have been able to determine,
the Design Inference amounts to no more than the following: once we've
eliminated all regularity and chance hypotheses that could explain a phenomenon,
we must conclude that the phenomenon was the result of design, and the way that
we eliminate chance hypotheses is by using the Law of Small Probabilities.
Unfortunately, this simple idea is couched in a lot of confusing and equivocal
language, making it difficult to understand exactly what Dembski means. As a
result, there are a number of substantially different interpretations of the
Design Inference in circulation, among both supporters and critics of Dembski. I
have attempted to obtain clarification of these issues from Dembski, but none
has been forthcoming. I therefore cannot be sure that my interpretation of
Dembski is correct, but I believe the interpretation I've arrived at is the most
charitable one, indeed the only one that makes any sense of the Design
Inference.
Note. Dembski initially refers to his procedure for detecting design as the
Explanatory Filter and to the underlying logical argument as the Design
Inference. At times, however, he refers to the procedure as the Design
Inference. For simplicity, I therefore treat the two terms as synonymous, and
deliberately use the term Design Inference in both senses.
2.1 The Design Inference is Eliminative
The Design Inference infers design by eliminating chance and regularity as
explanations. Furthermore, it's not sufficient to eliminate just one chance
explanation. The formal logic of the Design Inference, on pages 50-51 of TDI,
makes clear that we must consider and reject "...all the relevant chance
hypotheses that could be responsible for E [the observed event]...".
I believe that some readers of Dembski have missed this vital point, and
mistakenly come to the conclusion that only one chance hypothesis need be
considered. This may be because all but one of Dembski's examples entertain only
a single chance hypothesis (and that one occurs only in a footnote), or it may
be because of several misleading passages, such as the following:
'By contrast, a successful design inference sweeps the field clear of chance
hypotheses. The design inference, in inferring design, eliminates chance
entirely, whereas statistical hypothesis testing, in eliminating one chance
hypothesis, opens the door to others.' [TDI, p 7]
This passage seems to have it the wrong way around. It is not inferring
design that eliminates chance entirely; it is eliminating chance entirely that
allows us to infer design.
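The eliminative logic, as I interpret it, can be captured in a few lines of Python (the function and names are my own illustration, not Dembski's notation; the bound is arbitrary):

    def design_inference(prob_of_event_under, hypotheses, bound):
        """Infer design only if the specified event has small probability
        under EVERY relevant chance hypothesis we have thought of."""
        for h in hypotheses:
            if prob_of_event_under(h) >= bound:
                return f"chance not eliminated: hypothesis {h!r} survives"
        # NB: the verdict is only as good as the list `hypotheses`;
        # an overlooked hypothesis yields a false positive for design.
        return "design"

    # TDI's footnote example: 100 heads from a coin that is either fair
    # or biased 0.75 towards heads (the bound here is illustrative).
    verdict = design_inference(lambda p: p ** 100, [0.5, 0.75], bound=1e-30)
    print(verdict)  # the 0.75 hypothesis survives, so no design inference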
Because this is such a vital point, and yet has been widely misunderstood,
let me quote some more passages which show that the Design Inference requires
all chance hypotheses to be considered and rejected before design can be
inferred:
'The safecracking example presents the simplest case of a design inference.
Here there is only one possible chance hypothesis to consider, and when it is
eliminated, any appeal to chance is effectively dead. Design inferences in which
multiple chance hypotheses have to be considered and then eliminated arise as
well. We might, for instance, imagine explaining the occurrence of a hundred
heads in a row from a coin that is either fair or weighted in favor of heads
with probability of heads 0.75. To eliminate chance and infer design in this
example we would have to eliminate two chance hypotheses, one where the
probability of heads is 0.5 (i.e., the coin is fair) and the other where the
probability of heads is 0.75. To do this we would have to make sure that for
both probability distributions a hundred heads in a row is an SP [small
probability] event, and then show that this event is also specified. In case
still more chance hypotheses are operating, design follows only if each of these
additional chance hypotheses gets eliminated as well, which means that the event
has to be an SP event with respect to all the relevant chance hypotheses and in
each case be specified as well.' [TDI, p 44, footnote]
'Since an event has to have small probability to eliminate chance, and since
the design inference infers design by eliminating all relevant chance
hypotheses, SP(E;H) has to be satisfied for all H in [curly H].' [TDI, p 52]
'Given a chance hypothesis H that could conceivably explain E, S [a subject]
is obliged to retain H as a live possibility until a positive warrant is found
for rejecting it.' [TDI, p 220]
Note that Dembski refers above to "...eliminating all relevant chance
hypotheses..." (my emphasis). He uses the word "relevant" several times in this
context, but never elaborates on what he means by it. However, the final passage
quoted above indicates that we must eliminate any chance hypothesis we can think
of that could conceivably explain the observed event. It should be clear from
this that any inference of design is only as secure as our ability to think of
the relevant chance hypotheses. The true explanation for the event may be a
chance hypothesis of which we are unaware, and in that case we are liable to
erroneously arrive at a conclusion of design. Dembski, however, with assertions
such as "The sp/SP events of the Explanatory Filter exclude chance
decisively..." (TDI, p 41), gives the impression that such errors cannot occur.
Furthermore, elsewhere he makes this claim explicit:
'I argue that the explantory [sic] filter is a reliable criterion for
detecting design. Alternatively, I argue that the Explanatory Filter
successfully avoids false positives. Thus whenever the Explanatory Filter
attributes design, it does so correctly.' [2]
This claim is quite clearly untenable. Since it predates TDI, and is not made
explicitly in TDI, perhaps we should not take it too seriously. Indeed, even a
close associate of Dembski, with whom I raised the issue, was surprised by it.
However, if Dembski no longer wishes to stand by this claim, he should clarify
the issue by making a clear retraction.
2.2 Chance Hypotheses
Unfortunately, Dembski never defines the vital term "chance hypothesis." This
omission becomes particularly problematical when we try to apply the Design
Inference to biological structures. I propose that "Darwinian evolution" is a
relevant chance hypothesis for the origin of biological structures. But many
readers of Dembski, both critics and supporters, have disagreed, on the grounds
that Darwinian evolution is a combination of chance and regularity, not just
chance. Dembski himself has not clearly stated his position on this. But it
seems to me that the position of those who claim it is not a chance hypothesis
is untenable, for the following three reasons:
- If it isn't a relevant chance hypothesis, and therefore doesn't have to be
considered and rejected before we can infer design, then the Design Inference is
clearly nonsensical. How can we conclude design through an eliminative procedure
without eliminating the natural explanation which is most widely held by the
scientific community?
- All chance events (except maybe quantum events) are subject to the effect
of non-probabilistic natural laws, and therefore involve an element of
regularity. For example, the roll of a die is subject to mechanical forces.
Since all chance hypotheses involve an element of regularity, it makes no sense
to distinguish chance hypotheses from chance-and-regularity hypotheses.
- Dembski himself has written elsewhere:
'Does nature exhibit actual specified complexity? This is the million dollar
question. Michael Behe's notion of irreducible complexity is purported to be a
case of actual specified complexity and to be exhibited in real biochemical
systems (cf. his book Darwin's Black Box). If such systems are, as Behe claims,
highly improbable and thus genuinely complex with respect to the Darwinian
mechanism of mutation and natural selection and if they are specified in virtue
of their highly specific function (Behe looks to such systems as the bacterial
flagellum), then a door is reopened for design in science that has been closed
for well over a century. Does nature exhibit actual specified complexity? The
jury is still out.' [5]
This makes it pretty clear that Dembski himself considers the "Darwinian
mechanism" to be a chance hypothesis that must be rejected in order to infer
design.
I therefore assume that "Darwinian evolution" is a relevant chance hypothesis
for the origin of biological structures.
Furthermore, evolutionary biologists do not claim that their understanding of
evolution is perfect. There are many uncertainties and unknowns. So any attempt
to rule out "Darwinian evolution" must consider all possible variations of the
evolutionary process. Thus, in practice "Darwinian evolution" is not just a
single chance hypothesis, but a whole family of chance hypotheses, and the
Design Inference requires us to reject all of them before inferring design.
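The logical consequence of facing a family of hypotheses is that the event must be improbable even under the member that makes it most likely. A toy coin-bias family (my illustration; obviously not a model of evolution) shows the principle:

    biases = [0.5 + 0.01 * i for i in range(41)]         # p = 0.50 .. 0.90
    most_favourable = max(p ** 100 for p in biases)      # 100 heads, best member
    print(f"max P(E|H) over the family = {most_favourable:.2e}")  # ~2.66e-05

    # 2.66e-05 is nowhere near any small-probability bound, so the family
    # survives as a whole, even though its fair-coin member alone assigns
    # the event probability ~7.9e-31.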
2.3 Regularities
Dembski tells us that there are two categories of explanation that must be
rejected before we can infer design: regularity and chance. But what's the
difference between a regularity explanation and a chance explanation? According
to Dembski a regularity explanation is one according to which "...E is highly
probable..." (TDI, p 38). What's more, even deterministic events can be treated
as highly probable ones:
'It is convenient to think of all such regularities as probabilistic,
assimilating the nonprobabilistic case to the probabilistic case in which
probabilities collapse to 0 and 1.' [TDI, p 38]
Since Dembski never establishes any reason why highly probable events should
be treated differently from the events of "intermediate probability" which
correspond to chance explanations, or gives any method for establishing a
boundary value to separate the two categories, there is no theoretical reason
why we should not treat regularities as a type of chance hypothesis; they just
happen to have higher probabilities.
2.4 Design vs Intelligent Agency
Dembski has caused considerable confusion through his peculiar use of the
terms "design" and "intelligent agency". He defines "design" as "the
set-theoretic complement of the disjunction regularity-or-chance" (TDI, p 36),
i.e. it's what is left over after eliminating regularity and chance
explanations. But he also specifically denies that an inference of "design"
necessarily entails intelligent agency:
'Thus, even though a design inference is frequently the first step toward
identifying an intelligent agent, design as inferred from the design inference
does not logically entail an intelligent agent. The design that emerges from the
design inference must not be conflated with intelligent agency.' [TDI, p 9]
On the other hand, he never gives any additional criterion for distinguishing
intelligent agency from mere "design", and he goes on to write:
'Yet in practice, to infer design is not simply to eliminate regularity and
chance, but to detect the activity of an intelligent agent.' [TDI, p 62]
I will not attempt to reconcile these apparently contradictory statements.
Instead I will simply assume that the latter statement accurately reflects
Dembski's position, and will ignore the former statement. In other words, I will
assume Dembski is claiming that rejection of chance and regularity does allow us
to infer the involvement of an intelligent agent. This then allows us to
consider the terms "design" and "intelligent agency" to be synonymous (and I
assume they are both synonymous with the more widely used term "intelligent
design"). If this assumption is not correct, then Dembski needs to clearly state
what additional criterion is necessary to distinguish intelligent agency from
mere "design", or else recognise that the Design Inference cannot detect
intelligent agency.
The question then is whether Dembski is justified in claiming that
eliminating regularities and chance allows us to infer design/intelligent
agency. Well, it seems to me that this is trivially true. As explained above,
the Design Inference is eliminative. When we infer design, we are effectively
saying: "We've eliminated all the non-design explanations we can think of, and
we assume that there are no more non-design explanations that we can't think
of." It follows (providing our assumption is correct) that the true explanation
must involve design. Because of this eliminative approach, we conveniently have
no need to define design/intelligent agency. It can mean whatever we want it to
mean, as long as it excludes all the regularity and chance explanations that
we've explicitly rejected. This is useful, because, as far as I can tell,
proponents of Intelligent Design have not clearly defined what they mean by the
term.
2.5 Specified Complexity and CSI
Another factor which has contributed to confusion over the Design Inference
has been Dembski's use of the terms "specified complexity" and "CSI" (Complex
Specified Information). Although TDI explores the relationship between
probability, complexity and information, it doesn't use the terms specified
complexity or CSI. For explanations of these terms, the reader must refer to
other articles by Dembski [5] [6]. It appears, however, that these two terms are
equivalent, and that their introduction involves no change in the method of the
Design Inference. All that's happened is that Dembski has transformed the
calculated probabilities, and the probability bounds with which they're
compared, by applying a simple function: taking the logarithm to base 2 and then
negating, i.e. specified complexity or CSI = -log2(P), where P is the
probability of a specified event or a probability bound. Since the same
transformation has been applied to both sides of an inequality, there is no net
effect. If this interpretation is not correct, then Dembski needs to clearly
state how his method changes with the use of these terms, and why. The point is
significant, because, although these terms apparently add nothing to Dembski's
method, they do serve to obfuscate the issues in two important ways.
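Before taking those two ways in turn, the no-net-effect claim is easy to check. A minimal sketch (the probability value is illustrative; the bound is the universal bound discussed earlier):

    from math import log2

    bound = 0.5 * 10 ** -150   # Dembski's universal probability bound
    p = 1e-200                 # an illustrative calculated probability

    # The transformation described above: take -log2 of both sides.
    csi = -log2(p)             # ~664.4 "bits"
    threshold = -log2(bound)   # ~499.3 bits, commonly rounded to 500

    print(p < bound)           # True
    print(csi > threshold)     # True: the very same comparison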
First, the Design Inference requires the probability of a specified event to
be calculated (and shown to be small) for each relevant chance hypothesis.
However, when Dembski uses the term specified complexity (or CSI), he generally
refers simply to the specified complexity of an event or phenomenon, ignoring
the fact that the specified complexity (which is a function of probability) must
be defined with respect to a particular chance hypothesis. Thus, he obscures the
fact that, if there is more than one relevant chance hypothesis, then the event
or phenomenon has more than one measure of specified complexity. Furthermore, if
we know that an object is designed (let's say we watched it being made), does it
make any sense to say it has specified complexity? If we know the true
explanation, and it is design, then there are no relevant chance hypotheses with
respect to which we can calculate the specified complexity!
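The relativity to a chance hypothesis is easily demonstrated with the 100-heads example from TDI's footnote:

    from math import log2

    # One event, two specified-complexity values.
    p_fair = 0.5 ** 100        # fair coin
    p_biased = 0.75 ** 100     # coin biased 0.75 towards heads

    print(f"{-log2(p_fair):.1f} bits w.r.t. the fair coin")      # 100.0
    print(f"{-log2(p_biased):.1f} bits w.r.t. the biased coin")  # ~41.5

    # Talk of "the" specified complexity of the event conceals this
    # relativity to a chance hypothesis.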
Second, Dembski's use of the terms specified complexity, information and CSI
is misleading, because he conflates his own use of these terms with those of
other writers, without ever establishing that both he and they are using them in
the same sense. For example:
'As Davies puts it: "Living organisms are mysterious not for their complexity
per se, but for their tightly specified complexity" (p. 112).' [5]
'Manfred Eigen, for instance, writes, "Our task is to find an algorithm, a
natural law that leads to the origin of information," where by "information" I
understand him to mean specified complexity.' [5]
'CSI is what all the fuss over information has been about in recent years,
not just in biology, but in science generally. It is CSI that for Manfred Eigen
constitutes the great mystery of biology, and one he hopes eventually to unravel
in terms of algorithms and natural laws. It is CSI that for cosmologists
underlies the fine-tuning of the universe, and which the various anthropic
principles attempt to understand (cf. Barrow and Tipler, 1986). It is CSI that
David Bohm's quantum potentials are extracting when they scour the microworld
for what Bohm calls "active information" (cf. Bohm, 1993, pp. 35-38). It is CSI
that enables Maxwell's demon to outsmart a thermodynamic system tending towards
thermal equilibrium (cf. Landauer, 1991, p. 26). It is CSI on which David
Chalmers hopes to base a comprehensive theory of human consciousness (cf.
Chalmers, 1996, ch. 8). It is CSI that within the Kolmogorov-Chaitin theory of
algorithmic information takes the form of highly compressible, non-random
strings of digits (cf. Kolmogorov, 1965; Chaitin, 1966).' [6]
'Evolutionary biology has steadfastly resisted attributing CSI to intelligent
causation. Although Manfred Eigen recognizes that the central problem of
evolutionary biology is the origin of CSI, he has no thought of attributing CSI
to intelligent causation. According to Eigen natural causes are adequate to
explain the origin of CSI. The only question for Eigen is which natural causes
explain the origin of CSI. The logically prior question of whether natural
causes are even in-principle capable of explaining the origin of CSI he
ignores.' [6]
In this way, Dembski gives the impression that the existence of specified
complexity or CSI in nature is already widely accepted. But, if these other
writers are using the terms to mean different things, then this impression is a
false one. While I'm not familiar with the work of these other writers, I'm
highly doubtful that they are using these terms with the same meaning as
Dembski. In any case, the onus is on Dembski to show that the meanings are
equivalent before using the terms in this way. (I doubt whether the term CSI is
actually used at all by the cited writers. I suspect Dembski is just giving his
own interpretation of their work.)
2.6 Summary of Section 2
- The precise nature of the Design Inference is unclear from Dembski's
writings.
- By the most charitable interpretation of the Design Inference (the only one
that seems to make any sense), it amounts to no more than the simple statement
that, once we've eliminated all the non-design hypotheses that we can think of,
we should conclude that the phenomenon was the result of design, and the way
that we eliminate chance hypotheses is by using the Law of Small Probabilities.
- The Design Inference cannot rule out false positives, as has been claimed
by Dembski. After making an inference of design, the possibility always remains
that we have overlooked the true non-design explanation.
- Dembski's introduction of the terms specified complexity and CSI has done
nothing but add to the confusion surrounding the Design Inference.
I suspect that Dembski will respond to these charges (if he responds at all)
by saying that I've misunderstood him. If this turns out to be the case, then I
must point out that such misunderstandings are rife among Dembski's supporters
as well as his opponents. Even his closest associates don't seem to understand
the Design Inference, and Dembski himself refuses to answer questions about it.
Thus, the blame for any misunderstanding must rest squarely on Dembski's
shoulders. If my interpretation of the Design Inference is incorrect, then it's
time for Dembski to state clearly and publicly exactly what the method of the
Design Inference is.
3. Do the Calculation
In a passage quoted above, Dembski wrote: "Does
nature exhibit actual specified complexity? The jury is still out." Elsewhere,
however, he has claimed that specified complexity has been detected in
biochemical systems, and that therefore design has been found in nature:
"So there exists a reliable criterion for detecting design strictly from
observational features of the world. This criterion belongs to probability and
complexity theory, not to metaphysics and theology. And although it cannot
achieve logical demonstration, it does achieve a statistical justification so
compelling as to demand assent. This criterion is relevant to biology. When
applied to the complex, information-rich structures of biology, it detects
design. In particular, we can say with the weight of science behind us that the
complexity-specification criterion shows Michael Behe's irreducibly complex
biochemical systems to be designed." [7]
An almost identical passage appears in Dembski's book "Intelligent Design"
[8].
In TDI (p 228), Dembski stresses:
"There is a calculation to be performed. Do the calculation. Take the numbers
seriously. See if the underlying probabilities really are small enough to yield
design."
And:
"I stress again, Do the probability calculation
So the question is, where's the probability calculation that justifies the
claim made by Dembski above? Well, I've searched high and low, as have other
critics, and I have been unable to find any such calculation. No such
calculation appears in TDI or in any other work of Dembski's that I've seen.
From the quote above, one might assume that the calculation can be found in
Behe's work. However, although Dembski writes that "the CSI of a flagellum far
exceeds 500 bits" [8, p 178], he provides no calculation to support that claim,
and there is no such calculation in Behe's book, "Darwin's Black Box" [9].
Indeed, Behe's probabilistic claims seem to be based on intuition and not
calculation:
"Systems requiring several parts to function that need not be well-matched,
we can call "simple interactive" systems (designated 'SI'). Ones that require
well-matched components are irreducibly complex ('IC'). The line dividing SI and
IC systems is not sharp, because assignment to one or the other category is
based on probabilistic factors which often are hard to calculate and generally
have to be intuitively estimated based on always-incomplete background
knowledge. Moreover, no law of physics automatically rules out the chance origin
of even the most intricate IC system. As complexity increases, however, the odds
become so abysmally low that we reject chance as an explanation (Dembski 1998)."
[10] [The reference is to TDI.]
In case I'd overlooked some relevant calculation, I wrote to Dembski asking
for a citation. Although he replied to my email, he failed to cite any
calculation, and simply repeated the flagellum claim quoted above. A similar
request sent by Wesley Elsberry has (as of the date of writing) gone unanswered.
Dembski's failure to provide empirical data in support of his claim not only
shows that claim to be hollow, but also prevents us from arriving at a clearer
understanding of the method of the Design Inference. If Dembski presented a
detailed working of an application of the Design Inference to a biological
structure, that would help resolve the various uncertainties mentioned above.
Supporters of Dembski have sometimes claimed that this or that calculation is
an application of the Design Inference. However, given the uncertainties about
what an application of the Design Inference entails, it is impossible to accept
such claims coming from any source other than Dembski himself.
So I call on Dembski either to cite a calculation which substantiates his
claim, or else to retract the claim.
If Dembski could genuinely show that the probability of any sort of life
evolving in this universe was below his universal probability bound, then I
believe he would be justified in claiming that life did not evolve, providing we
don't resort to a multiplicity of universes. But this idea is hardly a new one,
and we didn't need all the panoply of TDI to tell us this. As Dembski himself
points out (TDI, pp 55-62), this sort of argument is already widely accepted by
people on all sides of the debate, including Dawkins, an arch-opponent of ID.
The problem for ID proponents is that no-one has yet been able to perform a
valid probability calculation of this sort, and it seems unlikely that anyone
will be able to do so in the foreseeable future. All Dembski has achieved with
his Design Inference has been to muddy the waters, and to give a false aura of
respectability to the sort of bogus probability calculations that creationists
have been trotting out for years.
References
[1] Dembski, W.A., "The Design Inference: Eliminating Chance Through Small
Probabilities", Cambridge University Press, 1998.
[2] Dembski, W.A., "The Explanatory Filter: A three-part filter for
understanding how to separate and identify cause from intelligent design", an
excerpt from a paper presented at the 1996 Mere Creation conference, originally
titled "Redesigning Science", http://www.arn.org/docs/dembski/wd_explfilter.htm.
[3] Dembski, W.A., "How Not to Analyze Design",
http://www.baylor.edu/~William_Dembski/docs_critics/eells.htm.
[4] Fitelson, Stephens and Sober, "How Not To Detect Design", Philosophy of
Science 66 (3), 1999, http://philosophy.wisc.edu/fitelson/DEMBSKI.PDF.
[5] Dembski, W.A., "Explaining Specified Complexity", Metaviews 139
(www.meta-list.org), 1999,
http://www.baylor.edu/~William_Dembski/docs_articles/meta139.htm.
[6] Dembski, W.A., "Intelligent Design as a Theory of Information", 1998,
http://www.arn.org/docs/dembski/wd_idtheory.htm.
[7] Dembski, W.A., "Science and Design", First Things 86, 1998,
http://www.baylor.edu/~William_Dembski/docs_articles/oct98FT.htm.
[8] Dembski, W.A., "Intelligent Design: The Bridge Between
Science Theology", InterVarsity Press, 1999.
[9] Behe, M.J., "Darwin's Black Box", Simon & Schuster, 1998.
[10] Behe, M.J., "Self-Organization and Irreducibly Complex Systems: A Reply
to Shanks and Joplin", Philosophy of Science 67 (1), 2000,
http://www.discovery.org/viewDB/index.php3?program=CRSC%20Responses&command=view&id=465.
Originally posted at Metanexus: The Online Forum on Religion and Science.