
The "Algorithm Room":
Can the "Design Inference"
Catch a Cheater?

By Wesley R. Elsberry

Posted May 6, 2002

William A. Dembski's writings claim that algorithms cannot produce Complex Specified Information (CSI), but that intelligent agents can. A recent posting of Dembski's introduced qualifiers to CSI, so that we now have "apparent CSI" and "actual CSI". Dembski categorizes as "apparent CSI" those solutions which meet the previously given criteria for CSI but which are produced via evolutionary computation. This contrasts with "actual CSI", in which a solution meets the CSI criteria and is produced by an intelligent agent. See The Anti-Evolutionists: William A. Dembski and follow the link for "Explaining Specified Complexity".

Dembski is also fond of both practical and hypothetical illustrations to make his points. I'd like to propose a hypothetical illustration to explore the utility of the "apparent CSI"/"actual CSI" split.

Let's say that we have an intelligent agent in a room. The room is equipped with all sorts of computers and tomes on algorithms, including the complete works of Knuth. We'll call this the "Algorithm Room". We pass a problem whose solution would meet the criteria of CSI into the room (say, a 100-city tour for the Traveling Salesman Problem, or perhaps Zeller's congruence applied to many dates). Enough time passes that our intelligent agent could work the problem posed from first principles by hand, without recourse to references or other resources. The correct solution is passed out of the room, with a statement from the intelligent agent that no computational or reference assistance was utilized. Under those circumstances, we pay our intelligent agent at a high consultant rate. But if our intelligent agent simply used the references or computers, he would get paid at the lowly computer operator rate. We suspect that our intelligent agent not only utilized the references or computers to accomplish the task, but that he also used the time thus freed up to do some light reading, like "Once Is Not Enough".
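
To make the second of these example problems concrete, here is a minimal sketch (my illustration, not anything from Dembski's writings) of Zeller's congruence in Python. Given a Gregorian date, it returns the day of the week; applied to many dates, it yields exactly the kind of output that could be checked independently once it is passed out of the room.

```python
def zeller_day_of_week(year, month, day):
    """Zeller's congruence for the Gregorian calendar.

    Returns 0 for Saturday, 1 for Sunday, ..., 6 for Friday.
    January and February are treated as months 13 and 14 of the
    previous year, per the standard formulation.
    """
    if month < 3:
        month += 12
        year -= 1
    q = day                  # day of the month
    m = month                # adjusted month
    K = year % 100           # year within the century
    J = year // 100          # zero-based century
    return (q + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

# Example: 6 May 2002 (the posting date of this article) gives 2, a Monday.
print(zeller_day_of_week(2002, 5, 6))
```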

There are four broad categories of possible explanation of the solution that was passed back out of the "Algorithm Room". First, our intelligent agent might have employed chance, throwing dice to come up with the solution, and then waiting an appropriate period to pass the solution out. Given that the solution actually did solve the problem passed in, we can be highly confident that this category of explanation is not the actual one. Second, our intelligent agent might have ignored every resource of the "Algorithm Room" and spent the entire time working out the solution from the basic information provided with the problem (distances between cities or dates in question). Third, our intelligent agent might have gone so far as to look up and apply, via pencil and paper, some appropriate algorithm taken from one of the reference books. In this case, the sole novel intelligent action on our agent's part was looking up the algorithm. Essentially, our agent utilized himself as a computer. Fourth, our intelligent agent might simply have fed the basic data into one of the computers and run an algorithm to pop out the needed solution. Again, the intelligent agent's deployment of intelligence stopped well short of being applied to produce the actual solution to the problem at hand.
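
To make the fourth category concrete, here is a minimal sketch of what "feeding the basic data into one of the computers" might look like for the Traveling Salesman Problem: a simple (1+1) evolutionary search that mutates a random tour and keeps only improvements. It is an illustrative stand-in of my own, not the specific evolutionary algorithm Dembski critiques, but it shows blind variation plus selection producing a solution without any further intelligent input.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def evolve_tour(dist, generations=10000, seed=0):
    """(1+1) evolutionary search: reverse a random segment (2-opt style
    mutation) and keep the change only when it shortens the tour."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour_length(tour, dist)
    for _ in range(generations):
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(candidate, dist)
        if length < best:            # selection: keep improvements only
            tour, best = candidate, length
    return tour, best

# Hypothetical usage: 100 random cities with Euclidean distances.
cities = [(random.random(), random.random()) for _ in range(100)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in cities]
        for (x1, y1) in cities]
print(evolve_tour(dist)[1])
```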

Because we suspect cheating, we wish to distinguish between a solution that results from the third or fourth category of action and one that results from the second category. We have only the attributes of the provided solution to go on. Can we determine whether cheating happened or not?

Dembski's article, "Explaining Specified Complexity", critiques a specific evolutionary algorithm. Dembski does not dispute that the solution represents CSI, but categorizes the result as "apparent CSI" because the specific algorithm critiqued must necessarily produce it. Dembski then claims that this same critique applies to all evolutionary algorithms, and Dembski includes natural selection within that category.

The question all this poses is whether Dembski's analytical processes bearing upon CSI can, in the absence of further information from inside the "Algorithm Room", decide whether the solution received was actually the work of the intelligent agent (and thus "actual CSI") or the product of an algorithm falsely claimed to be the agent's own work (and thus "apparent CSI").

If Dembski's analytical techniques cannot resolve the issue of possible cheating in the "Algorithm Room", how does he hope to resolve the issue of whether certain features of biology are necessarily the work of an intelligent agent or agents? If Dembski has no solution to this dilemma, the Design Inference is dead.

Wesley R. Elsberry is a student in Wildlife & Fisheries Sciences, Texas A&M University.

* * *


Location of this article: http://www.talkreason.org/articles/Algorithm.cfm