
Can Specified Complexity Even Have a Mechanism?

What are the candidates here for something in nature that is nonetheless beyond nature? In my view the most promising candidate is specified complexity. The term "specified complexity" has been in use for about 30 years. The first reference to it with which I'm familiar is from Leslie Orgel's 1973 book The Origins of Life, where specified complexity is treated as a feature of biological systems distinct from inorganic systems. Richard Dawkins also employs the notion in The Blind Watchmaker, though he doesn't use the actual term (he refers to complex systems that are independently specified). In his most recent book, The Fifth Miracle, Paul Davies claims (p. 112) that life isn't mysterious because of its complexity per se but because of its "tightly specified complexity." Stuart Kauffman, in his just-published Investigations (October 2000), proposes a "fourth law" of thermodynamics to account for specified complexity.

Specified complexity is a form of information, though one richer than Shannon information, which focuses exclusively on the complexity of information without reference to its specification. A repetitive sequence of bits is specified without being complex. A random sequence of bits is complex without being specified. A sequence of bits representing, say, a progression of prime numbers will be both complex and specified. In The Design Inference I show how inferring design is equivalent to identifying specified complexity (significantly, this means that intelligent design can be conceived as a branch of information theory).
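To make the three bit-sequence examples concrete, consider the following toy sketch. It uses compressed length as a crude stand-in for complexity and an exact match against an independently given pattern as a stand-in for specification; these operationalizations, the pattern list, and the sequence length are illustrative assumptions of the editor, not the formal apparatus of The Design Inference:

```python
import random
import zlib

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Independently given patterns, describable without reference to any observed string.
PATTERNS = {
    "alternating": lambda i: "01"[i % 2],
    "primes":      lambda i: "1" if is_prime(i) else "0",
}

def specified_by(s):
    """Names of the independent patterns the string matches exactly."""
    return [name for name, p in PATTERNS.items()
            if all(c == p(i) for i, c in enumerate(s))]

N = 1024
samples = {
    "repetitive":  "01" * (N // 2),                                        # specified, not complex
    "random":      "".join(random.choice("01") for _ in range(N)),         # complex, not specified
    "prime-coded": "".join("1" if is_prime(i) else "0" for i in range(N)), # both
}

for name, s in samples.items():
    compressed = len(zlib.compress(s.encode(), 9))  # crude complexity proxy
    print(f"{name:11s} compressed length: {compressed:4d} bytes, "
          f"specified by: {specified_by(s) or 'no independent pattern'}")
```

Run this and the repetitive string compresses to a handful of bytes (little complexity) while matching the alternating pattern; the random string resists compression but matches no independent pattern; and the prime-coded string both resists compression and matches an independently given pattern.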

Most scientists familiar with specified complexity think that the Darwinian mechanism is adequate to account for it once one has differential reproduction and survival (in No Free Lunch I'll show that the Darwinian mechanism has no such power, though for now let's let it ride). But outside a context that includes replicators, no one has a clue how specified complexity occurs by naturalistic means. This is not to say there hasn't been plenty of speculation (e.g., clay templates, hydrothermal vents, and hypercycles), but none of this speculation has come close to solving the problem. Unfortunately for naturalistic origin-of-life researchers, this problem seems not to be eliminable, since the simplest replicators we know already require specified complexity. Consequently, Paul Davies suggests that the explanation of specified complexity will require some fundamentally new kinds of natural laws. But so far these laws are completely unknown. Kauffman's reference to a "fourth law," for instance, merely cloaks the scientific community's ignorance about the naturalistic mechanisms supposedly responsible for the specified complexity in nature.

Van Till agrees that specified complexity is an open problem for science. At a recent symposium on intelligent design at the University of New Brunswick sponsored by the Center for Theology and the Natural Sciences (15-16 September 2000), Van Till and I took part in a panel discussion. When I asked him how he accounts for specified complexity in nature, he called it a mystery that he hopes further scientific inquiry will resolve. But resolve in what sense? On Van Till's Robust Formational Economy Principle, there must be some causal mechanism in nature that accounts for any instance of specified complexity. We may not know it and we may never know it, but surely it is there. For the design theorist to invoke a non-natural intelligence is therefore out of bounds. But what happens once some causal mechanism is found that accounts for a given instance of specified complexity? Something that's specified and complex is by definition highly improbable with respect to all causal mechanisms currently known. Consequently, for a causal mechanism to come along and explain something that previously was regarded as specified and complex means that the item in question is in fact no longer specified and complex with respect to the newly found causal mechanism. The task of causal mechanisms is to render probable what otherwise seems highly improbable. Consequently, the way naturalism explains specified complexity is by dissolving it. Intelligent design makes specified complexity a starting point for inquiry. Naturalism regards it as a problem to be eliminated. (That's why, for instance, Richard Dawkins wrote Climbing Mount Improbable. To climb Mount Improbable one needs to find a gradual route that breaks a horrendous improbability into a sequence of manageable probabilities, each of which is easily bridged by a natural mechanism.)
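Dawkins' own illustration of this strategy is his well-known "weasel" program from The Blind Watchmaker, sketched below. The population size and mutation rate are assumed parameters chosen by the editor, not Dawkins' originals; the point is only to show how cumulative selection locks in each manageable step, so that the target is reached in hundreds of generations even though the single-step probability is astronomically small:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100        # offspring per generation (assumed)
MUTATION_RATE = 0.05  # per-character mutation probability (assumed)

def mutate(parent):
    """Copy the parent, replacing each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

def fitness(s):
    """Number of characters matching the target -- the 'selection' criterion."""
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Each generation keeps the fittest of many mutated offspring.
    parent = max((mutate(parent) for _ in range(POP_SIZE)), key=fitness)

print(f"Reached target in {generation} generations")
print(f"Single-step probability: (1/27)^{len(TARGET)} ≈ {27.0 ** -len(TARGET):.1e}")
```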

Lord Kelvin once remarked, "If I can make a mechanical model, then I can understand; if I cannot make one, I do not understand." Repeatedly, critics of design have asked design theorists to provide a causal mechanism whereby a non-natural designer inputs specified complexity into the world. This question presupposes a self-defeating conception of design and tries to force design onto a Procrustean bed sure to kill it. Intelligent design is not a mechanistic theory! Intelligent design regards Lord Kelvin's dictum about mechanical models not as a sound regulative principle for science but as a straitjacket that artificially constricts science. SETI researchers are not invoking a mechanism when they explain a radio transmission from outer space as the result of an extraterrestrial intelligence. To ask for a mechanism to explain the effect of an intelligence (leaving aside derived intentionality) is like Aristotelians asking Newton what it is that keeps bodies in rectilinear motion at a constant velocity (for Aristotle the crucial distinction was between motion and rest; for Newton it was between accelerated and unaccelerated motion). This is simply not a question that arises within Newtonian mechanics, which proposes an entirely different problematic from Aristotelian physics. Similarly, intelligent design proposes a far richer problematic than science committed to naturalism. Intelligent design is fully capable of accommodating mechanistic explanations and has no interest in dismissing them. Such explanations are wonderful as far as they go. But they only go so far, and they are incapable of accounting for specified complexity.

In rejecting mechanical accounts of specified complexity, design theorists are not arguing from ignorance. Arguments from ignorance have the form "Not X, therefore Y." Design theorists are not saying that for a given natural object exhibiting specified complexity, all the natural causal mechanisms so far considered have failed to account for it and therefore it had to be designed. Rather, they are saying that the specified complexity exhibited by a natural object can be such that there are compelling reasons to think that no natural causal mechanism is capable of producing it. Usually these "compelling reasons" take the form of an argument from contingency, in which the object exhibiting specified complexity is compatible with but in no way determined by the natural laws relevant to its occurrence. For instance, for polynucleotides and polypeptides there are no physical laws that account for why one particular nucleotide base or amino acid follows another in a sequence. The laws of chemistry allow any possible sequence of nucleotide bases (joined along a sugar-phosphate backbone) as well as any possible sequence of L-amino acids (joined by peptide bonds).
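A back-of-envelope count shows how large the resulting contingency is. The sequence lengths below are illustrative assumptions chosen for scale (a short gene, a small protein), not figures from the text:

```python
from math import log10

gene_length = 300     # nucleotide bases in a short gene (illustrative assumption)
protein_length = 100  # amino acid residues in a small protein (illustrative assumption)

# Chemistry permits any base at each position and any of the 20 L-amino acids
# at each residue, so the spaces of permissible sequences are:
print(f"4^{gene_length} ≈ 10^{gene_length * log10(4):.0f} possible nucleotide sequences")
print(f"20^{protein_length} ≈ 10^{protein_length * log10(20):.0f} possible amino acid sequences")
```

Since no law privileges any one point in these spaces, the actual sequence is contingent: permitted by chemistry but not prescribed by it.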

Design theorists are attempting to make the same sort of argument against mechanistic accounts of specified complexity that modern chemistry makes against alchemy. Alchemy sought to transform base metals into precious metals using very limited means like furnaces and potions (though not particle accelerators). Now, we rightly do not regard the contemporary rejection of alchemy as an argument from ignorance. For instance, we don't charge the National Science Foundation with committing an argument from ignorance for refusing to fund alchemical research. It's evident that not every combination of furnaces and potions has been tried to transform lead into gold. But that's no reason to think that some combination of furnaces and potions might still constitute a promising avenue for effecting the desired transformation: we now know enough about atomic physics to preclude it. So too, we are fast approaching the point where we can say that transforming a biological system that doesn't exhibit a given instance of specified complexity (say, a bacterium without a flagellum) into one that does (say, a bacterium with a flagellum) cannot be accomplished by purely natural means but also requires intelligence.

There are a lot of details to be filled in, and design theorists are working overtime to fill them in. What I'm offering here is not the details but an overview of the design research program as it tries to justify the inability of natural mechanisms to account for specified complexity. This part of its program is properly viewed as belonging to science. Science is in the business of establishing not only the causal mechanisms capable of accounting for an object having certain characteristics but also the inability of causal mechanisms to account for such an object -- what Stephen Meyer calls "proscriptive generalizations." There are no causal mechanisms that can account for perpetual motion machines: they violate the second law of thermodynamics and can thus be eliminated on theoretical grounds. That is a proscriptive generalization. Design theorists are likewise offering in-principle theoretical objections for why the specified complexity in biological systems cannot be accounted for in terms of purely natural causal mechanisms. They too are seeking to establish proscriptive generalizations. Proscriptive generalizations are not arguments from ignorance.

Assuming such an in-principle argument can be made (and for the sequel I will assume it can), the design theorist's inference to design can no longer be considered an argument from ignorance. With such an argument in hand, the design theorist has not only excluded all natural causal mechanisms that might account for the specified complexity of a natural object but has also excluded all explanations that might in turn exclude design. The design inference is therefore not purely an eliminative argument, as is so frequently charged. Specified complexity presupposes that the entire set of relevant chance hypotheses has first been identified. This takes considerable background knowledge. What's more, it takes considerable background knowledge to come up with the right pattern (i.e., specification) for eliminating all those chance hypotheses and thus for inferring design. Design inferences that identify specified complexity do not merely exclude; they exclude from an exhaustive set of hypotheses in which design is all that remains once the inference has done its work (this is not to say that the set is logically exhaustive; rather, it is exhaustive with respect to the inquiry in question -- that's all we can ever do in science).

It follows that, contrary to the frequently leveled charge that design is untestable, design is in fact eminently testable. Indeed, specified complexity tests for design. Specified complexity is a well-defined statistical notion. The only question is whether an object in the real world exhibits specified complexity: does it correspond to an independently given pattern, and is the event delimited by that pattern highly improbable (i.e., complex)? These questions admit a rigorous mathematical formulation and are readily applicable in practice. Not only is design eminently testable, but to deny that design is testable commits the fallacy of petitio principii -- that is, begging the question or arguing in a circle (Robert Larmer developed this criticism effectively at the New Brunswick symposium adverted to earlier). It may well be that the evidence that a designer acted to bring about a given natural structure is insufficient. But to claim that there could never be enough evidence to justify that conclusion is insupportable. The only way to justify the latter claim is by imposing on science a methodological principle that deliberately excludes design from natural systems, to wit, methodological naturalism. But to say that design is not testable because we've defined it out of existence is hardly satisfying or legitimate. Darwin claimed to have tested for design in biology and found it wanting. Design theorists are now testing for design in biology afresh and finding that biology is chock-full of design.
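The test this paragraph describes can be rendered schematically as follows. This is a minimal sketch by the editor, not Dembski's published formalism, though the default cutoff echoes the universal probability bound of 10^-150 he proposes in The Design Inference; the set of chance hypotheses supplied is whatever the inquiry in question treats as exhaustive:

```python
def exhibits_specified_complexity(chance_probabilities, matches_pattern,
                                  bound=1e-150):
    """Schematic design test: the event must match an independently given
    pattern AND be sufficiently improbable under every relevant chance
    hypothesis. The hypothesis set is exhaustive only relative to the inquiry.
    The bound of 10^-150 echoes Dembski's universal probability bound."""
    return matches_pattern and all(p < bound for p in chance_probabilities.values())

# Example: an event with tiny probability under each of two chance hypotheses.
probs = {"uniform chance": 2.0 ** -1000, "biased chance": 2.0 ** -600}
print(exhibits_specified_complexity(probs, matches_pattern=True))   # True
print(exhibits_specified_complexity(probs, matches_pattern=False))  # False
```

Note that both conditions must hold: high improbability alone (the random string of the earlier example) does not trigger the inference, and neither does a pattern match alone.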

Specified complexity is only a mystery so long as it must be explained mechanistically. But the fact is that we attribute specified complexity to intelligences (and therefore to entities that are not mechanisms) all the time. The reason attributing the specified complexity of biological systems to intelligence is regarded as problematic is that such an intelligence would in all likelihood have to be unembodied (though strictly speaking this is not required of intelligent design -- the designer could in principle be an embodied intelligence, as with the panspermia theories). But how does an unembodied intelligence interact with natural objects and get them to exhibit specified complexity? We are back to Van Till's problem of extra-natural assembly.

Contributed by: Dr. William Dembski
