The Critic's Resource on AntiEvolution

Dembski, William A.

Criticism of Dembski's "Explanatory Filter": Vindicated


I’ve been saying that there were problems in William Dembski’s “explanatory filter” for a long, long time. Dembski has finally admitted that was the case.

At the February 1997 NTSE conference, when I brought up the “traveling salesman problem” solved by genetic algorithm as an example that countered Dembski’s EF, he responded that his logic was sound and his premises were true, therefore his conclusion followed. Dembski in that instant dismissed empirical data as having any bearing on his work. It only took the better part of twelve years for Dembski to repudiate the soundness of his logic presented then.

Typing Monkeys: History of an Idea


by Wesley R. Elsberry


It is difficult to find the originators of certain concepts which
pass quickly into general use. The analogy of monkeys typing
at random on typewriters and eventually reproducing copies of
literary works is one such concept.

In tracking down who might have originated the concept, we will
find people who definitely use or reference it, as well as variants
of how it is expressed. We will also explore limitations upon
who might have originated the concept or when the concept might
reasonably have been first told to a general audience.

Unacknowledged Errors in “Unacknowledged Costs” Essay


Back over the summer, William Dembski was talking up "Baylor's Evolutionary Informatics Laboratory", and one of the features there was a PDF of an essay critiquing the "ev" evolutionary computation program by Tom Schneider. Titled "Unacknowledged Information Costs in Evolutionary Computing", the essay by Robert J. Marks and William A. Dembski made some pretty stunning claims about the "ev" program. Among them, it claimed that blind search was a more effective strategy than evolutionary computation for the problem at hand, and that the search structure in place was responsible for most of the information resulting from the program. The essay was pitched as being "in review", publication unspecified. Dembski also made much of the fact that Tom Schneider had not, at some point, posted a response to the essay.

There are some things that Marks and Dembski did right, and others that were botched. What they got right was posting the scripts they used to generate the data behind their conclusions, and removing the paper from the site on notification of the errors. Posting the scripts allowed others to figure out where they went wrong. What is surprising is just how trivial the error was, and how poor the scrutiny must have been to let things get to this point.

Now what remains to be seen is whether, in any future iteration of their paper, they bother to do the scholarly thing and acknowledge both the errors and those who brought the errors to their attention. Dembski at least has an exceedingly poor track record on this score, despite having written that critics can be used to improve materials released online. While Dembski has occasionally taken a clue from a critic, it is rather rare to see him acknowledge his debt to one.

In the current case, Marks and Dembski owe a debt to Tom Schneider, "After the Bar Closes" regular "2ndclass", and "Good Math, Bad Math" commenter David vun Kannon. Schneider worked from properties of the "ev" simulation itself to demonstrate that the numbers in the Marks and Dembski critique cannot possibly be correct. "2ndclass" made a project out of examining the Matlab script provided with the Marks and Dembski paper to find the source of the bogus data used to form the conclusions of Marks and Dembski. vun Kannon suggested an easy way to use the Java version of "ev" to quickly check the claims by Marks and Dembski.

(Also posted at the Austringer)

Dances with Popper: An Examination of Dembski's Claims on Testability

Posted by Wesley R. Elsberry on May 8, 2004 03:34 PM (Original)

Otto: Apes don't read philosophy.

Wanda: Yes, they do, Otto, they just don't understand it.

("A Fish Called Wanda")

In his new book, The Design Revolution, “intelligent design” advocate William A. Dembski invokes the late philosopher Sir Karl Popper as an authority on “testability” (ch. 39, pp.281-282).  Perhaps Dembski has read Popper, perhaps he hasn’t. It’s certain, though, that Dembski does not understand Popper, and has a long history of not understanding Popper. Which is surprising, because Popper was an extraordinarily accessible philosopher.

Dembski bases his chapter on “Testability” in The Design Revolution (ch. 39) on an essay he posted to the Internet in 2001. Between the two, Dembski switches from the term “falsifiability” to “refutability”. This is an odd thing for Dembski to do, but it is explainable as a response to criticism I made of his use of “falsifiability” in 2001, when I showed that Dembski’s usage differed markedly from that of Popper, who defined the term’s usage in science and philosophy. The new version of Dembski’s argument shows a continuing misunderstanding of Popper and leaves the fundamental flaws in Dembski’s argument unaddressed.

Specified Complexity Depends Upon Implicit Design Conjectures


William Dembski's No Free Lunch contains the following passage:

The presumption here is that if a subject S can figure out an independently given pattern to which E conforms (i.e., a detachable rejection region of probability less than alpha that includes E), then so could someone else. Indeed, the presumption is that someone else used that very same item of background knowledge -- the one used by S to eliminate H -- to bring about E in the first place.

[No Free Lunch, p. 75]

Because Dembski's framework is based upon the elimination of alternative explanations, we end up with a situation in which Dembski attributes the complement of the probability assigned to chance hypotheses to an implicit design conjecture: the one that underlies a particular "specification". When the "saturated" probability of the alternative is less than 1/2, Dembski says that we should prefer "design" as our causal explanation; and because of this relationship between the specification and the putative causal story, we are thus adopting that particular causal conjecture.

Specified Complexity and Reliability


(Originally posted to, retrieved via Google Groups.)

From: "Wesley R. Elsberry"
Subject: Re: Designer as a Scientific Theory
Date: 1998/09/07
Message-ID: #1/1
X-Deja-AN: 388825198
Organization: Online Zoologists

In article ,
Ivar Ylvisaker wrote:
>Wesley R. Elsberry wrote:
>>In article

IY>[with much snipping]

Me too.


IY>I don't think that Dembski will accept Wesley's version of
IY>a filter that detects intelligent designers. Wesley's
IY>filter stage 3 passes only those phenomena that we know are
IY>caused by intelligent designers. I assume that Wesley is
IY>referring to man and, maybe animals as designers but not to
IY>unknown supernatural beings. Dembski wants to go further.

Yes, Dembski *wants* to go further. Unfortunately, we do not
have in hand the justification to do so. Is it contained in
Dembski's forthcoming book? Somehow, I doubt it, but I do
look forward to seeing Dr. Dembski try.

Finite Improbability Calculator


The Finite Improbability Calculator is a collection of routines to permit exploration of very small probabilities. Many antievolutionary arguments are based upon an argument from improbability: some phenomenon is so improbable that it must be due to an intelligent agent.


  1. Select an operation to perform from the list.
  2. Enter the parameters for the operation.
  3. Press the button for the operation.
  4. Results appear in a table at the top of this page.


Change of base


Old Base
New Base




Factorial


Enter a positive integer in the box:


Permutation and Combination


Total elements
Selected elements


Specified Anti-Information


Length of uncompressed string:

Length of program/input pair that produces the string:

Number of different symbols in strings:


Dembski's p_origin and M/N ratio


Perturbation tolerance:
Perturbation identity:
Number of symbols:
Length of string:

Page numbers refer to "No Free Lunch".


Dembski's p_local


Number of items in system (e.g., 50):
Number of copies of each item (e.g., 5):
Number of possible substitutions per item (e.g., 10):
Total number of items available (e.g., 4289):

Page numbers refer to "No Free Lunch".


Dembski's p_perturb


Number of subunits (N):
Different types of subunits (k):
Perturbation tolerance factor (q):
Perturbation identity factor (r):

Page numbers refer to "No Free Lunch".


Error in dembskis


Error Measurement
Expected Value


Hazen Functional Complexity


N (number of possible configurations)
M(Ex) (number of functional configurations)


Notes on calculations

Factorial:  The point here is to permit calculation of factorial(n) where n can be a large number, say the number of proteins which an organism codes for.  However, even a "double" floating-point number overflows at 1.7e308.  So factorials are calculated here using a logarithmic representation.  The Stirling approximation is used for very large n, and a logarithmic version of the classical iterative method is used for smaller n.  Stirling's approximation is taken as

            n! ~ n^n e^(-n) sqrt(2 * pi * n) (1 + 1/(12n))
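As a sketch of the approach described above (Python used here for illustration; the calculator itself was written in Perl and later PHP, and the function name is mine), a log-space factorial via Stirling's approximation with the first correction term might look like:

```python
import math

def log10_factorial(n):
    """Approximate log10(n!) without overflow, using Stirling's
    approximation with the first correction term:
    n! ~ n^n e^(-n) sqrt(2 pi n) (1 + 1/(12 n))."""
    if n < 2:
        return 0.0
    ln_fact = (n * math.log(n) - n
               + 0.5 * math.log(2 * math.pi * n)
               + math.log(1 + 1 / (12 * n)))
    return ln_fact / math.log(10)
```

Even n = 10^6, whose factorial has over five million digits, stays comfortably within floating-point range this way.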

Change of base: Calculated as 

            new exponent = (old_exponent * ln(oldbase)) / ln(newbase)
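In code, the change of base is a one-liner (a Python sketch; the function name is mine):

```python
import math

def change_base(old_exponent, old_base, new_base):
    """Re-express old_base**old_exponent as new_base**result:
    new_exponent = old_exponent * ln(old_base) / ln(new_base)."""
    return old_exponent * math.log(old_base) / math.log(new_base)
```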

Permutation and combination: Uses the factorial function discussed above.

            permutations =  n! / (n - k)!

            combinations = n! / k!(n - k)!
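A Python sketch of these two formulas in log space, using the standard library's lgamma as a stand-in for the calculator's own factorial routine (names are mine):

```python
import math

def log10_fact(n):
    # log10(n!) via the log-gamma function, which never overflows
    return math.lgamma(n + 1) / math.log(10)

def log10_permutations(n, k):
    # log10 of n! / (n - k)!
    return log10_fact(n) - log10_fact(n - k)

def log10_combinations(n, k):
    # log10 of n! / (k! (n - k)!)
    return log10_fact(n) - log10_fact(k) - log10_fact(n - k)
```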

Specified Anti-Information

Specified Anti-Information is an application of the "universal distribution" of Kirchherr et alia 1997, expounded in Elsberry and Shallit 2003. SAI is a framework intended as an alternative to Dembski's "design inference". The SAI of a bit string is defined as

SAI = max(0,|y| - C(y))

where |y| is the length of the bit string of interest and C(y) is the Kolmogorov complexity of y. Since C(y) is uncomputable, mostly we should speak of Known Specified Anti-Information, which is just the maximum SAI that can be established by application of known compression techniques.

SAI is defined for bit strings, but often we deal with strings based on a symbol set with cardinality > 2. It is straightforward to determine the length of a bit string needed to represent such a string, though, using the "change of base" function presented earlier. The second part of the SAI section permits SAI to be calculated for such strings.
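Since any real compressor yields an upper bound on C(y), Known SAI can be demonstrated with an ordinary compressor such as zlib. A Python sketch (the function name is mine; each bit is represented as one ASCII character, which is wasteful but keeps the example simple):

```python
import zlib

def known_sai(bitstring):
    """Known Specified Anti-Information: the string's length in bits minus
    an upper bound on its Kolmogorov complexity C(y), floored at zero.
    zlib's compressed size (in bits) serves as the computable bound."""
    compressed_bits = 8 * len(zlib.compress(bitstring.encode("ascii"), 9))
    return max(0, len(bitstring) - compressed_bits)
```

A highly patterned string, such as "01" repeated thousands of times, yields a large positive Known SAI; short or incompressible strings yield zero.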

Something to note here is the apparent difference in ease of application of SAI with the various measures introduced by Dembski.

porig approximation (as per NFL p.301):

            porig ~ symbols^(-length * (perturbation_tolerance - perturbation_identity))

The discussion on page 301 implies that functional proteins may themselves be considered "discrete combinatorial objects" to which this formula would apply.  With a little exploration, then, one can verify that any functional protein of length 1153 or greater has an origination probability smaller than Dembski's "universal small probability".
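That length-1153 claim can be checked directly. A Python sketch of the porig approximation (the function name is mine; the check assumes 20 amino-acid symbols and a tolerance-minus-identity difference of 0.1):

```python
import math

def log10_porig(symbols, length, tolerance, identity):
    """log10 of porig ~ symbols^(-length * (tolerance - identity)),
    the approximation from NFL p. 301."""
    return -length * (tolerance - identity) * math.log10(symbols)
```

Under those assumptions, length 1153 puts log10(porig) just below -150, Dembski's universal small probability bound, while length 1152 stays just above it.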

plocal calculation (as per NFL p.293):

            plocal = (units in system * substitutions / total different units) ^ (units in system * copies)
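A Python sketch of the plocal calculation (the function name is mine), using the example values suggested in the form above (50 items, 5 copies, 10 substitutions per item, 4289 total):

```python
import math

def log10_plocal(items, copies, substitutions, total):
    """log10 of plocal = (items * substitutions / total)^(items * copies),
    per NFL p. 293."""
    return items * copies * math.log10(items * substitutions / total)
```

Those example values give a log10(plocal) of about -233.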

M/N ratio approximation (as per NFL p.297):

            M/N ratio ~ (combinations(length, tolerance * length) * (symbols-1)^(tolerance * length)) / (combinations(length, identity * length) * (symbols-1)^(identity * length))

There is a discrepancy between the result that Dembski reports for his example calculation of an M/N ratio on p.297 and what the Finite Improbability Calculator reports.  Plug in symbols=30, length=1000, tolerance=0.1, identity=0.2 and the result comes out as 5.555117e-223, whereas Dembski reports 10^-288, a factor of about 10^65 off.  Jeff Shallit noted this error in Dembski's text some time back.
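The discrepancy is easy to reproduce. A Python sketch of the M/N ratio in log space (function names are mine; tolerance = 0.1 is assumed here, as that value reproduces the calculator's reported figure):

```python
import math

def log10_comb(n, k):
    # log10 of the binomial coefficient C(n, k), via log-gamma
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(10)

def log10_mn_ratio(symbols, length, tolerance, identity):
    """log10 of the M/N ratio approximation from NFL p. 297."""
    num = (log10_comb(length, round(tolerance * length))
           + tolerance * length * math.log10(symbols - 1))
    den = (log10_comb(length, round(identity * length))
           + identity * length * math.log10(symbols - 1))
    return num - den
```

log10_mn_ratio(30, 1000, 0.1, 0.2) comes out near -222.26, i.e. roughly 5.6e-223, about 65 orders of magnitude larger than Dembski's 10^-288.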

DCO pperturb approximations (as per NFL pp.299 and 300):

            pperturb (p.299) ~ (combinations(length, tolerance * length) / combinations(length, identity * length)) * (symbols-1)^(length * (tolerance - identity))

            pperturb (p.300) ~ symbols^(length * (tolerance - identity))
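A Python sketch of both pperturb approximations (function names are mine; the p.299 form needs the binomial coefficient, again computed via log-gamma):

```python
import math

def log10_comb(n, k):
    # log10 of the binomial coefficient C(n, k), via log-gamma
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(10)

def log10_pperturb_p299(symbols, length, tolerance, identity):
    """log10 of (C(L, qL) / C(L, rL)) * (symbols-1)^(L * (q - r))."""
    return (log10_comb(length, round(tolerance * length))
            - log10_comb(length, round(identity * length))
            + length * (tolerance - identity) * math.log10(symbols - 1))

def log10_pperturb_p300(symbols, length, tolerance, identity):
    """log10 of symbols^(L * (q - r)), the simpler p. 300 form."""
    return length * (tolerance - identity) * math.log10(symbols)
```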

Error in dembskis

The idea that error might be measured in a unit called "dembskis", scaling discrepancies in terms of orders of magnitude, came up in discussion of errors in an essay by Marks and Dembski. The reference case for the measure is the one mentioned above in the M/N ratio calculation note, where Dembski had an error of about 65 orders of magnitude. "Dave W." formalized the notion with an equation, and W. Kevin Vicklund suggested using a rounded-off value of 150 as the constant in the denominator, based upon Dembski's figure of 1 in 10^150 as a universal small probability. Thus, the final form of quantifying error in dembskis (Reed Cartwright proposed the symbol Δ) is

Δ = | log10(erroneous measure) - log10(correct measure) | / 150

There is not yet a consensus on what to term the unit, but two proposals being considered are "Dmb" and "duns".
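A minimal Python sketch of the unit (logs taken base 10, consistent with counting orders of magnitude; the function name is mine):

```python
import math

def dembskis(erroneous, correct):
    """Error in dembskis: the orders-of-magnitude gap between an
    erroneous measure and the correct one, scaled by 150."""
    return abs(math.log10(erroneous) - math.log10(correct)) / 150
```

The M/N case above, 10^-288 reported against roughly 5.6e-223 correct, comes out to about 0.44 dembskis.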

Hazen Functional Complexity

The calculation is made per the 2007 PNAS paper by Hazen et al. Given a number of possible configurations, N, and a (smaller) number of functionally equivalent configurations, M(Ex), one obtains the functional complexity metric, I(Ex), as:

I(Ex) = - log2(M(Ex) / N)
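A one-function Python sketch of the metric (the function name is mine):

```python
import math

def functional_complexity(n_total, m_functional):
    """Hazen et al. (2007) functional complexity:
    I(Ex) = -log2(M(Ex) / N)."""
    return -math.log2(m_functional / n_total)
```

For instance, 1 functional configuration out of 1024 possible gives I(Ex) = 10 bits.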


Dembski, William A. 2002. No Free Lunch. Rowman & Littlefield Publishers.

Elsberry, Wesley R. and Jeffrey Shallit. 2003. Information Theory, Evolutionary Computation, and Dembski's "Complex Specified Information".

Hazen, R. M., P. L. Griffin, J. M. Carothers, and J. W. Szostak. 2007. Functional information and the emergence of biocomplexity. Proceedings of the National Academy of Sciences USA 104 (Suppl 1): 8574-8581.

Kirchherr, W., M. Li, and P. Vitanyi. 1997. The miraculous universal distribution. Mathematical Intelligencer 19(4): 7-15.

The Finite Improbability Calculator was first coded in spring of 2002, following publication of William Dembski's book, "No Free Lunch". The original utilized a Perl CGI script. The FIC was ported to a PHP instantiation in January, 2004, with routines added for calculating Specified Anti-Information. The FIC then was altered to work within a Drupal page using the "PHP code" option.

The name of this page was inspired by "The Hitchhiker's Guide to the Galaxy" by the late great Douglas Adams.


Does ID Get a Pass?

On Friday, Feb. 3rd, I was able to pose a question to Greer-Heard Forum headliners Michael Ruse and William Dembski. Here's a transcript of that segment:

WRE:Actually I'm interested in a public policy aspect of this whole thing. Last month, I got on the Web of Science database search and looked up the term "cold fusion" and it came up with 900 papers there. "Cold fusion" is the poster child for the "not-ready-for-prime-time" physics theory, something that is not ready for going into 9th grade biology, no, physics textbooks. We see the process of science in things like plate tectonics, and the endosymbiotic theory, the neutral theory, and punctuated equilibria, these are things that have earned a place in the textbooks, because the people put in the work, they convinced the scientific community that they had a point, and that's why they're in the textbooks. So, what I'd like to hear from both of you is, is there a justification for giving intelligent design a pass on this process?

Ruse: No.

Dembski: That was short, but I think I can expand on that a little bit. A few years back, I wrote a paper, in fact I think I delivered it at a conference that I think that you attended, what was the title, Becoming a Disciplined Science, Pitfalls, Problems, various things confronting intelligent design, and in that paper I addressed what I thought a real concern for me that intelligent design would become in instrumental good used by various groups to further certain ends, but that the science would get short-shrifted, and I argued that the science was the intrinsic good, and indeed that's my motivation, ultimately. I could make my peace with Darwinism if I had to, and I'm sufficiently theologically astute to do the fancy footwork, but it's the science itself that I don't think holds up, and that's what motivates me to critique Darwinism and develop intelligent design. But as I argued in that paper, intelligent design has to be developed as a scientific program, otherwise you, you can't get a pass, I'm with you on that. And I was not a supporter of this Dover policy. Once it was enacted, once the Thomas More Law Center was going ahead with it, I did agree to be an expert witness there, but I think it is premature.

Dembski's "intelligent design" questions for teachers: Answered

William A. Dembski has a number of questions that he'd like to have students pose to their teachers. These questions quoted from Dembski are rendered in italics.

Here are some answers for the teachers to give their students in reply.

1. DESIGN DETECTION. If the universe, or some aspect of it, is intelligently designed, how could we know it? Do reliable methods for detecting design exist? What are they? Are such methods employed in forensics, archeology, and data fraud analysis? Could they conceivably detect design in biological systems?

A. We know about "design" because of our prior experience. Our appreciation of "design" comes about through an inductive process, as David Hume noted over two centuries ago. Nothing so far discovered by science has changed this. Scientists have developed a number of techniques and protocols to aid them in distinguishing artifacts that have been made by humans and other animals from things that have simply weathered or been subject to some non-volitional process. These techniques, however, have had nothing to do with conjectures posed by "intelligent design" advocates. Humans who alter the genetic information of bacteria and other organisms sometimes patent the exact changes made. Matching the genetic information in an organism to the changed genetic sequences would establish that the organism inherited that information from such an altered source. Again, this sort of process is not based upon and has not benefited from work done by "intelligent design" advocates.

Additional reading: The advantages of theft over toil, an article that clearly lays out problems in the sort of procedure that is common to "intelligent design" approaches to "detecting design". Information Theory, Evolutionary Computation, and Dembski's "Complex Specified Information" is an in-depth analysis of the claims that Dembski made concerning "design detection" up through 2003. There are also chapters devoted to this topic in Mark Perakh's Unintelligent Design and the anthology edited by Matt Young and Taner Edis, Why Intelligent Design Fails.

Of Frauds and Fingerprints

Over on his weblog, William Dembski has a post making reference to an article on a means of "fingerprinting" textured surfaces, like paper. It is an interesting article. But look what Dembski has to say about it:

The Logic of Fingerprinting

Check out the following article in the July 28th, 2005 issue of Nature, which clearly indicates how improbability arguments can be used to eliminate randomness and infer design: “‘Fingerprinting’ documents and packaging: Unique surface imperfections serve as an easily identifiable feature in the fight against fraud.” I run through the logic here in the first two chapters of The Design Inference.

Well, it is a little troubling how to proceed from this point. Did Dembski fail to read the article? Is Dembski simply spouting something that ID cheerleaders can nod sagely about without regard to whether it happens to accord with reality? Whatever excuse might be given, the plain fact of the matter is that the procedure and principles referred to in the short PDF Dembski cites have nothing whatever to do with Dembski's "design inference", and cannot be forced into the framework Dembski claims.
