In an article appearing in the October 1998 First Things, William A. Dembski announced the existence of rigorous and reliable means for detecting the action of an intelligent agent. Its description and justification, said Dembski, would be found in the pages of his new book, The Design Inference (TDI). Dembski made a special point of applying a criterion he called complexity-specification to biological phenomena, with the claim that biologists must now admit design into their science.
Dembski's TDI is a slim and scholarly volume, as one expects from a distinguished academic press. Dembski employs clear writing, illustrative examples, and cogent argumentation. The work, though, is motivated and informed by an anti-evolutionary impulse, and its flaws appear to follow from the need to achieve a particular religious aim. The anti-evolutionary bent is not as overt here, however, as it is in other works by Dembski and his colleagues Phillip Johnson, Michael Behe, Paul Nelson, and Stephen Meyer at the Discovery Institute Center for the Renewal of Science and Culture. The closest that Dembski comes within the pages of TDI to staking out an explicit position on evolutionary issues is in Section 2.3, where a "case study" is made of "the creation-evolution controversy". In it, Dembski accuses evolutionary biologists of rejecting one or more premises of his Design Inference in order to avoid reaching a conclusion of design for biological phenomena. Of course, for "Intelligent Design" creationists, as it was for William Paley, it is not sufficient merely to prove that something was intelligently designed; it is also essential that the agent of design be identified as the God of the Bible. But TDI carefully avoids explicit religious referents, even separating "evidence for design" from "evidence of agency".
In my opinion, Dembski's Design Inference fails to reliably identify phenomena that are due to design by an intelligent agent, both because of flaws in its logic and because it fails to consider additional mechanisms (like natural selection) that could produce an apparently designed effect. In the following review, I shall attempt to explain why this is so.
Categories and Definitions
Dembski deploys a large number of specialized terms and phrases in making his argument that design must be recognized as a necessary mode of explanation in science. Fortunately, Dembski generally makes clear what each term means, even when it also has a common or casual usage. Design is one of those terms, and in Dembski's usage it becomes a category defined by the elimination of events that can be attributed to regularity or to chance. Regularity is equivalent to high probability - an event that will "(almost) always happen" (p 36). Chance applies to any event with intermediate or low probability, but for which no specification exists. A specified event conforms to a pattern that is determined in advance or can be given independently of the event.
"Specification" needs further description. Dembski illustrates the meaning of specifications which allow us to reject chance explanations by contrasting them to fabrications which do not. For an archer to hit 100 bull's-eyes is not chance; we would conclude that the archer had great skill. But if the pattern of 100 bull's-eyes was obtained by the archer's shooting the arrows and then drawing targets around them, we would not make the same conclusion. The pattern of 100 arrows and bull's-eyes would be the same in each situation, but because we had specified in advance certain characteristics (like the bull's-eye being on the wall before the arrow was shot), we can eliminate chance in the former situation and attribute the performance to skill.
Complexity-specification describes how the jointly-held attributes of complexity (events of low probability) and specification (previously-determined pattern) reveal the presence of design in an event. And design thus becomes any event with both a low probability and an independently-given pattern. Another way to look at Dembski's Design Inference is that complexity excludes high- and intermediate-probability events, specification excludes chance events, and regularity comprises events marked by high probability. Therefore, complexity-specification yields those events that fall into the exclusionary category of design as Dembski uses the term - events that are of low probability and not due to chance.
For Dembski, the Design Inference is a deductive argument which can lead to the recognition of complexity-specification, and thus design, for a particular event. Since these 3 categories (regularity, chance, and design) embrace all events, and design is established by elimination of the other two categories, design is thus the set-theoretical complement of regularity and chance.
Dembski applies what he calls his "Explanatory Filter" to determine design. Complete with flowchart (p 37), the Explanatory Filter has 3 decision nodes. At the first decision node, if an event is deemed to have high probability, it is classified as due to a regularity; that is, the event can be explained through law-like physical processes. An as-yet unclassified event then moves on to the second decision node. If it has intermediate probability, it is classified as due to chance. Thus-far unclassified events (which have low probability) then move on to the third decision node. If the event both has a low probability and also conforms to a specification, it is classified as due to design; if it has low probability and is unspecified, it is classified as due to chance.
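The flowchart's decision logic can be sketched in a few lines of Python. The numeric probability thresholds below are illustrative placeholders of my own; TDI does not commit to particular cutoff values.

```python
def explanatory_filter(probability, is_specified,
                       high=0.99, low=1e-6):
    """A sketch of Dembski's three-node Explanatory Filter (TDI p 37).

    The `high` and `low` thresholds are hypothetical values chosen
    for illustration; Dembski does not fix specific numbers.
    """
    # Node 1: high-probability events are attributed to regularity.
    if probability >= high:
        return "regularity"
    # Node 2: intermediate-probability events are attributed to chance.
    if probability > low:
        return "chance"
    # Node 3: low-probability events count as design only if they also
    # conform to an independently given specification.
    return "design" if is_specified else "chance"
```

Note that "design" here is purely the residue of the other two nodes, which is exactly the feature of the filter examined below.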
It is time to look more closely at Dembski's Design Inference, to find out whether it does allow us to detect design by the elimination of alternative mechanisms. The Design Inference is a deductive argument based on the elimination of alternatives. Such arguments only work if the conclusion is the result of exhausting the available alternatives. Dembski assures us that this is the case by defining design as what is left after regularity and chance have been eliminated. Thus, what "design" means depends upon the way that regularity and chance are eliminated.
Process of Elimination
Dembski offers 2 somewhat different methods for eliminating regularity. In the first, regularity is recognized if an event has a high probability of occurrence. This is part of his discussion of the Explanatory Filter. The second method identifies an event that conforms to relevant natural laws, but is not constrained by them, and thus is not attributable to those laws. This method is discussed in relation to Dembski's Design Inference (p 53). It is not clear that each of these 2 methods would classify the same set of events as not being due to regularity. This ambiguity increases our uncertainty concerning the residue that is left over to be classified as either chance or design.
Dembski throughout TDI claims that deduction leads ineluctably and conclusively to certain events' being due to design. The catch is that Dembski is using his own definition of design, where design is simply the residue that remains after chance and regularity are eliminated. But there are alternative filters that better fit reality. I will illustrate one such alternative with an example filter of my own.
My alternative Explanatory Filter has 4 nodes, not 3.
- First, an event that cannot be statistically distinguished from a random event is classified as due to chance.
- Next, an event that conforms to properties of known law-like physical processes is classified as being due to regularity.
- An event that conforms to known properties of similar events that are due to intelligent agents is classified as due to design.
- Finally, any event which has not yet been classified is now classified as being due to an unknown cause.
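For contrast, the same kind of sketch can be given for my alternative filter. The boolean inputs stand in for the empirical judgments described in the list above; the function is an illustration of the decision order, not a formal procedure.

```python
def alternative_filter(indistinguishable_from_random,
                       matches_known_regularity,
                       matches_known_design):
    """A sketch of my four-node alternative Explanatory Filter.

    Each boolean argument abstracts an empirical judgment; the
    argument names are illustrative.
    """
    # Node 1: eliminate chance first, by statistical comparison
    # with random events.
    if indistinguishable_from_random:
        return "chance"
    # Node 2: events conforming to known law-like physical processes.
    if matches_known_regularity:
        return "regularity"
    # Node 3: events conforming to known products of intelligent agents.
    if matches_known_design:
        return "design"
    # Node 4: withhold judgment rather than force a classification.
    return "unknown"
```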
My alternative Explanatory Filter differs in several critical ways. First, the ordering of decisions is different. Dembski justifies his choice of order with an explication of explanatory priority (p 38-40). But Dembski's arguments for eliminating regularity before eliminating chance are neither convincing nor reflective of how people ordinarily explain things. Random events conform well to the null hypothesis (that is, that the event is due to chance and not to design or regularity) and should be eliminated first in determination of causation.
Dembski's own example of a pair of loaded dice to show why regularity has explanatory priority over chance demonstrates that his filter has the order reversed. He explains that because the loaded dice yield high probabilities that certain faces will come up, the explanation to be preferred is regularity. However, Dembski ignores the fact that in order to determine that regularity and not chance is at work with the loaded dice, we must compare the rolls of the dice to the expectation for "fair" dice. Only when chance has been eliminated can we then entertain the notion that the results for the particular loaded dice in question are due to a regularity. In point of fact, with sufficient testing and knowledge of the circumstances, the loaded dice example resolves into an instance of design, not regularity. This does not mean that design then has explanatory priority. Rather, it illustrates the superior explanatory power of my alternative filter in which chance must be considered and rejected before either regularity or design can be concluded.
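The comparison against the expectation for fair dice is an ordinary statistical test. As a sketch (the roll counts are hypothetical), a Pearson chi-square goodness-of-fit statistic is enough to reject chance for a die loaded toward one face:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts from 60 rolls of a suspect die.
observed = [5, 6, 4, 5, 5, 35]
expected = [10] * 6  # fair-die expectation

stat = chi_square_stat(observed, expected)  # 75.2 for these counts
CRITICAL_5DF_05 = 11.07  # chi-square critical value, df=5, alpha=0.05
chance_rejected = stat > CRITICAL_5DF_05
```

Only after chance is rejected in this way does the question of regularity versus design for the loaded die even arise.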
A second advantage to my Explanatory Filter is its additional classification of unknown causation. This alternative recognizes that the set of knowledge used to make a classification can alter the classification. By allowing an event to be classified as having unknown causation, I simultaneously reduce the number of false classifications that will later be overturned by additional information and identify those events whose circumstances require further study in order to identify a causative factor. The use of unknown causation as a category is common in day-to-day human efforts to detect design in events, such as in forensics. Forcing final classification of events when knowledge is limited ensures that mistakes in classification will be made when Dembski's Explanatory Filter is employed.
A third advantage to my alternative Explanatory Filter is that the common meaning of "design" is retained as a reliable indicator of "agency". We recognize design in our day-to-day life because of prior experience with objects and events designed or caused by intelligent agents. It is important to recognize that there is a difference between a reliable classifier and a design detector. The goal of such an exercise should be to classify events accurately, not to just single out the designed ones.
Dembski utilizes the Search for Extra-Terrestrial Intelligence (SETI) project as an example of detecting design without particular knowledge of a designer. But SETI can only detect signals that possess certain properties known from prior experience of humans communicating via radio wavelengths. SETI works to find events that conform to our prior experience of how intelligent agents use radio wavelengths to communicate. SETI does not support the notion that novel design/designer relationships can be detected. ETI that communicate in ways outside human experience will be invisible to, and undetected by, SETI. The issue of agency, in fact, deserves more attention. Like many "Intelligent Design" creationists, Dembski tries to avoid mentioning the "designer", and in fact, promotes his Explanatory Filter as being superior because it supposedly separates agency from design (TDI p 8, 36, 226-7).
Design, Agency, and Natural Selection
One may wonder what TDI was supposed to accomplish, if design no longer means what Paley meant by it and the attribution of agency no longer necessarily follows from finding design. When he assures the reader that design does not imply agency, Dembski seems to want things both ways: one can detect design without implying agency, though one is justified in inferring agency when one sees design. But is it a secure inference? According to Dembski, because humans identify human agency using reasoning equivalent to the Explanatory Filter, the Explanatory Filter encapsulates our general method for detecting agency. Because TDI is equivalent to the Explanatory Filter, if we conclude design through the TDI, we also must conclude agency.
The apparent, but unstated, logic behind the move from design to agency can be given as follows:
- Some subset of objects known to be designed by an intelligent agent possess a common attribute (complexity-specification).
- This attribute is never found in objects known not to be designed by an intelligent agent.
- The attribute includes the property of directed contingency (choice).
- For all objects, if this attribute is found in an object, then we may conclude that the object was designed by an intelligent agent.
This is an inductive argument. Notice that by the second step, one must eliminate from consideration precisely those biological phenomena which Dembski wishes to categorize. In order to conclude intelligent agency for biological examples, the possibility that intelligent agency is not operative is excluded a priori. This is stacking the deck.

An intelligent agent reveals itself by making choices, or in Dembski's terms, directed contingency. An intelligent agent chooses "from a range of competing possibilities" (p 62), and does so by actualizing "one among several competing possibilities", excluding the rest, and specifying (ahead of time) what is to be chosen. Dembski claims this triad of criteria - actualization-exclusion-specification - is sufficient for establishing that an intelligent agent has been at work and finds that design as he defines it is congruent with these criteria.
One large problem is that directed contingency or choice is not an attribute solely of events that result from the intervention of an intelligent agent. Both directed contingency and the triad itself can be explained quite adequately by natural selection as a cause. Actualization occurs as heritable variation arises. Exclusion results as some heritable variations lead to differential reproductive success. Specification occurs as environmental conditions specify which variations are preferred. One might thus conclude that Dembski's argument establishes that natural selection can be recognized as an intelligent agent. By my reading, Dembski's argument supports a position that biologists can embrace a conclusion of design for an event of biological origin and still attribute that event to the agency of natural selection.
It is an error to argue from the casual meanings of regularity, chance, and design when discussing causes for events classified by Dembski's Explanatory Filter or by TDI. Someone might seek to exclude natural selection from consideration as a source of events that meet the criteria of design by claiming that it is either a regularity or chance. But TDI classifies events, not causes. Dembski points this out himself when he says that using the Explanatory Filter may not always lead to a conclusion of design for an event that we know is due to the action of an intelligent agent, because agents can mimic the results of regularity or chance.
The point is more significant than Dembski admits. A causal class cannot be classified into regularity or chance in advance without begging the question. Specifically, one cannot state in advance that natural selection is either regularity or chance because the events which are due to natural selection must be evaluated by their own properties to establish which category best describes those events. Just as intelligent agents can sometimes produce events which appear to be due to regularity or chance rather than design, so too can natural selection be responsible for events in all 3 categories. It is insufficient to show that some examples of natural selection fall into either the "regularity" or "chance" explanatory categories. When arguing that no physical process is the agent producing a designed event, one must show that natural selection is incapable in principle of producing events with the attribute of design. Such a demonstration would have to address the application of natural selection in both biology and computer science, where use of the principle of natural selection has been employed in solving very difficult optimization problems.
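The computer-science point can be made concrete with Dawkins's well-known "weasel" demonstration of cumulative selection, sketched below (the target string, mutation rate, and other parameters are illustrative choices of mine, not anything from TDI). Random variation plus selective retention reliably reaches a target that would be wildly improbable as a single-step chance event, and the three parts of Dembski's triad each have a counterpart in the loop:

```python
import random

def weasel(target="METHINKS IT IS LIKE A WEASEL",
           alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ ",
           brood_size=100, mutation_rate=0.04,
           max_gens=10_000, seed=1):
    """Toy model of cumulative selection; parameters are illustrative.

    Actualization: mutated copies of the parent arise.
    Exclusion: only the fittest copy survives to reproduce.
    Specification: fitness is match to an independently given target.
    """
    rng = random.Random(seed)
    score = lambda s: sum(a == b for a, b in zip(s, target))
    parent = "".join(rng.choice(alphabet) for _ in target)
    for gen in range(max_gens):
        if parent == target:
            return gen  # generations needed to reach the target
        # Heritable variation: each character may mutate in each copy.
        brood = ["".join(rng.choice(alphabet)
                         if rng.random() < mutation_rate else c
                         for c in parent)
                 for _ in range(brood_size)]
        # Differential reproductive success: best variant survives.
        parent = max(brood + [parent], key=score)
    return max_gens
```

The selection here is single-step greedy truncation, far cruder than biological selection, yet it suffices to produce an outcome that Dembski's filter would classify as designed.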
In summary, the process of detecting design, as it is done by humans in day-to-day activities, is not accurately captured by Dembski's Explanatory Filter. The order in which classes of causes are eliminated makes a difference. Humans attempting to explain phenomena can and often do find insufficient evidence to make a final determination of either design or any other explanation. And when humans use the word design, they typically mean it to carry a real implication of being due to an agent or designer.
Second, Dembski's Explanatory Filter does not help us to identify the cause or the agent of the "specifications" which it seeks to classify. That there is an agent or that the agent is "intelligent" must be concluded prior to applying the Design Inference. Using Dembski's own criteria, we cannot rule out natural selection as a cause for the design found in the events and organisms around us. Somehow, I doubt that natural selection is what Dembski has in mind for the author of design.
Dembski utilizes the Explanatory Filter and equivalent logical arguments in order to place his criterion of design on a deductive footing. That criterion, complexity-specification, does not help us to identify a cause, or an agent, of an event. Its sole purpose is to detect design as Dembski employs the term. The step from detection of design to inference of an intelligent agent is made by an inductive argument, and shares in the problems of all conclusions drawn from an inductive basis. Dembski argues that a triad of criteria reliably diagnoses the action of an intelligent agent, yet this same triad of criteria fails to exclude natural selection as a possible cause of events that have the attribute of complexity-specification. Again, I doubt that natural selection is what Dembski had in mind for the agent of biological design.
The Design Inference is a work with great significance for those anti-evolutionists who have embraced "intelligent design" as their organizing principle and see that Dembski's TDI is supposed to establish the theoretical foundation for all the rest of the movement (see, for example, comments posted on the web at http://www.discovery.org/fellows/design.html). My judgment is that it fails to lay a solid foundation. There are flaws and cracks that can admit the entry of naturalistic causes into the pool of "designed" events. It is unfortunate that Dembski's focus is the establishment of "intelligent design" as an anti-evolutionary alternative, for his insights into elimination of chance hypotheses would appear to have legitimate application to various outstanding research questions, such as certain issues in animal cognition and intelligence. Despite Dembski's commentary in his First Things article, there appears to be no justification for the claim that biologists must now admit design (in its old, agency-laden sense) into biological explanation.