What is Intelligent Design
Intelligent Design (ID) is a theory based on empirical observations. It posits that purposeful design of an object can be inferred post factum if certain conditions are satisfied. These conditions include:
- A sufficiently high Kolmogorov complexity of the object's description in a given alphabet and a given universal description language, and
- An independently given specification the object complies with (notably, a functional specification).
The high complexity and the specification together serve to exclude the possibility of the object being a result of chance, necessity or their combination on the gamut of a given reference system (such as the Earth, the Solar system or the universe) [Dembski 2007]. Objects amenable to design inference are said to bear sufficient amounts of specified complex information (SCI). Various metrics are available to measure this information [UncommonDescent].
Note that while it is not possible to infer design of an object whose description is of low complexity, this does not necessarily mean the object has not been designed. In other words, complexity and specification are sufficient conditions for design inference, but not necessary ones.
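As a rough illustration of how the two conditions combine, here is a minimal Python sketch. It uses zlib-compressed length as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable), a caller-supplied flag for the specification, and the 500-bit threshold discussed further below. This is a toy of my own, not any published SCI metric.

```python
import zlib

def complexity_bits(description: bytes) -> int:
    """Approximate descriptive complexity by compressed length in bits."""
    return 8 * len(zlib.compress(description, 9))

def infer_design(description: bytes, matches_spec: bool,
                 threshold_bits: int = 500) -> bool:
    """Infer design only when the description is both specified and complex."""
    return matches_spec and complexity_bits(description) >= threshold_bits

# A highly regular string compresses well: low complexity, so no design
# inference even though it matches a (trivial) specification.
print(infer_design(b"ab" * 200, matches_spec=True))   # False

# Ordinary prose resists compression and can clear the threshold.
text = (b"It posits that design of an object can be inferred post factum "
        b"if certain conditions are satisfied.")
print(infer_design(text, matches_spec=True))          # True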
Some ID Links and Names
- The main ID blog: www.uncommondescent.com
- Evolution News and Views: http://www.evolutionnews.org/
- Discovery Institute: http://www.discovery.org/
- Biologic Institute: http://www.biologicinstitute.org/
- The evolutionary informatics lab: http://evoinfo.org/
- Bio-Complexity Journal: http://bio-complexity.org/ojs/index.php/main
Some of the leading ID theorists are William Dembski, Michael Behe, Stephen Meyer, Jonathan Wells, David Abel and Douglas Axe. Michael Denton, Paul Davies, Fred Hoyle, Steve Fuller, David Berlinski and Leslie Orgel can undoubtedly be counted as proponents of telic ideas broadly, or of design in particular. Interested readers are invited to search for their books and publications in refereed journals.
ID and the World of Refereed Publications
Quite often ID is criticised for not being scientific. The critics say that books are all very well, but one has to publish in peer-reviewed media, which involves a good deal of scientific scrutiny. Here is a list of peer-reviewed ID publications: http://www.discovery.org/a/2640.
Where To Start
The UncommonDescent blog has a FAQ page and a series of posts titled ID Foundations, which I recommend starting from if you are interested.
The Gist of ID
1. Plausibility problems of abiogenesis and macroevolution.
The essential ID claims are based on statistics. The main argument is that for anything functional, multiple independent system parameters must be set to specific "function-friendly" values (Figure 1). In the majority of cases, the functional values are located in small target zones. Consider a car engine, for instance. It has a standard range of temperatures, pressures, tolerances in the carburetor, the ignition sequence and timings, as well as many other parameters. They all need to be set appropriately for the engine to function correctly. A haphazard combination of values simply will not work in the overwhelming majority of cases. Obviously, the more independent parameters need to be set, the lower the probability of a successful haphazard set of values becomes. At some point, it becomes operationally zero. Clearly, the number of parameter combinations is subject to combinatorial explosion, so statistically, on the gamut of the observed universe, without intelligence there is effectively zero chance of something functionally complex organising spontaneously.
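As a toy illustration of this combinatorial collapse (my own sketch, not taken from any ID source), the following simulation draws each parameter uniformly and assumes every "function-friendly" zone covers 10% of that parameter's range; the chance of hitting all zones at once decays exponentially with the number of parameters.

```python
import random

def hit_probability(n_params: int, zone_width: float = 0.1,
                    trials: int = 100_000) -> float:
    """Estimate the chance that all n_params land in their target zones."""
    hits = 0
    for _ in range(trials):
        if all(random.random() < zone_width for _ in range(n_params)):
            hits += 1
    return hits / trials

for n in (1, 3, 6, 10):
    # Analytically the probability is zone_width ** n; the simulation agrees,
    # and for n = 10 it is already indistinguishable from zero in practice.
    print(n, hit_probability(n), 0.1 ** n)
```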
- Specifications necessary for design inference determine the shape and size of parameter target zones. Clearly, the zones must be sufficiently small for credible empirical inference to conscious design.
Figure 1. The area of functional parameter value combinations for an abstract system that has three parameters: P1, P2 and P3. The functional parameter value ranges are shown in green.
In practice, complex systems often exhibit what is known as irreducible complexity, whereby each member of a subset of their components is indispensable in the sense of contributing to a joint function [Behe 1996, Behe 2007]. As soon as at least one of these components fails, the original function is compromised or changed. Irreducible complexity relates to the concept of maximal independent sets in mathematics. Examples include systems utilising certain chemical reactions such as autocatalysis, systems whose functioning is based on resonance, etc. Leaving aside the discussion about the origins of irreducibly complex biosystems, it can be demonstrated that biological systems as they are today have an irreducibly complex functional core [UncommonDescent]. Indeed, very complex structures such as living cells need correctly co-working subsystems of metabolism, replication, reaction to stimuli, etc. from the start of life, where Darwinian gradualism does not work!
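The "co-working core" idea can be caricatured in a few lines of Python (my own toy model, with made-up component names): the joint function survives only while every core component is present.

```python
# Core subsystems assumed indispensable for the joint function (illustrative).
CORE = {"metabolism", "replication", "stimulus_response"}

def functional(components: set) -> bool:
    """The system functions only if every core component is present."""
    return CORE <= components

cell = {"metabolism", "replication", "stimulus_response", "motility"}
print(functional(cell))                      # True: the full core is present
for part in sorted(CORE):
    # Removing any single core component destroys the joint function.
    print(part, functional(cell - {part}))   # False in every case
```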
We can show by means of simple calculations that such an imaginary event as the self-assembly of a proto-cell cannot plausibly have occurred in the entire history of the world [Abel 2009]. To do this, we can determine an upper bound on the number of events such as physico-chemical interactions on the Earth as follows. Knowing the characteristic time R of the fastest chemical reactions (of the order of 10^-13 s), the age A of the Earth (~10^17 s) and the number N of molecules on the Earth (~10^40), we have:
Nmax = A × N / R = 10^17 s × 10^40 / 10^-13 s = 10^70,
a very liberal estimate of the maximum possible number of physico-chemical interactions that can have occurred. 1/Nmax = 10^-70 then gives us what is called the universal plausibility metric for the respective reference system (in this case, our planet).
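The bound is easy to reproduce; the snippet below simply redoes the arithmetic above with the order-of-magnitude inputs quoted after [Abel 2009].

```python
A = 1e17   # age of the Earth, seconds
N = 1e40   # number of molecules on the Earth
R = 1e-13  # duration of the fastest chemical reactions, seconds

N_max = A * N / R   # upper bound on the number of physico-chemical events
upm = 1 / N_max     # universal plausibility metric for the Earth
print(f"N_max = {N_max:.0e}, UPM = {upm:.0e}")  # N_max = 1e+70, UPM = 1e-70
```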
A practical optimistic information threshold for descriptions in binary form is 500 bits, which allows encoding 2^500 ≈ 3.27 × 10^150 distinct states/configurations of a complex system. An example of such a system is a protein molecule (Figure 2), which has multiple amino acid residues: from 150 to a few thousand per domain (its structural/functional unit). Various configurations of the system then correspond to various residue sequences in a protein domain.
Figure 2. DNA-binding domain and a portion of a DNA molecule. Schematic representations of molecular surfaces (top) and of the tertiary structure of the complex (bottom). Source: Wikipedia, Transcription factors (in Russian).
As an illustration, independently of the plausibility threshold calculations, biologists have found that a functional sequence of amino acids in a protein occurs once in every 10^77 sequences on average [Axe 2004]. So this is implausible without design on the gamut of terrestrial interactions!
The practical bound of 500 bits (10^150 states) is a liberal threshold. Indeed, using the same chain of reasoning as in the case of terrestrial interactions above, we can establish that on the gamut of the Solar system at most 10^102 quantum Planck states can have been realised. Our bound is 48 orders of magnitude above that, which is liberal enough. However, in terms of human language texts, 500 bits corresponds to just 60-odd characters. This demonstrates the implausibility of the spontaneous emergence of meaningful texts longer than that (e.g. the text you are reading now)! Note that by this logic, the plausibility of gradual accumulation of information is also taken into consideration and consequently ruled out.
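The arithmetic behind these figures can be checked directly; in the character estimate below, ~8 bits per character is my own assumption, chosen to match the "60-odd characters" figure above.

```python
import math

states = 2 ** 500
print(f"{states:.3g}")           # ~3.27e+150 configurations

# Margin, in orders of magnitude, over the ~10^102 Planck-time quantum
# states quoted above for the Solar system.
print(math.log10(states) - 102)  # ~48.5

# Rough character equivalent of 500 bits for text, at ~8 bits per character.
print(500 / 8)                   # 62.5 characters
```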
Existing simplistic models of macroevolution such as [MacKay 2003] do not take into consideration the fact that, due to chaos, there may not be a Darwinian selectable path from one taxon to another (see here). Consequently, in practice the rate of information accumulation in biosystems acted upon by mutations, recombination, drift and natural selection can be much lower than in theory. For example, according to [MacKay 2003], the amount of information passed from parents to children via genetic recombination induced by sexual reproduction is, under some favourable assumptions, of the order of √G, where G is the size of the genome in bits (evaluated in the sketch below). In my opinion, the emergence of sexual reproduction itself, as well as many other mechanisms used in biosystems and usually attributed to evolution, cannot be adequately explained by evolutionary phenomena alone.
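For a feel of the √G figure, here is a quick order-of-magnitude evaluation. The genome size is an assumption of this sketch (a human-scale genome of ~3.2 × 10^9 base pairs at 2 bits per base), not a figure from [MacKay 2003].

```python
import math

G = 3.2e9 * 2                  # assumed genome size in bits
per_generation = math.sqrt(G)  # MacKay-style bound on info gain per generation
print(f"G = {G:.1e} bits, per generation <= {per_generation:.0f} bits")
# G = 6.4e+09 bits, per generation <= 80000 bits
```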
[Durston et al. 2007] deals with a very important question: the plausible source of functionally specified information in biosystems. Functionally specified information carried by genetic prescriptions is associated with the functionality of protein molecules. By definition, the amount of functionally specified information is given by the negative logarithm of the ratio of the number of amino acid sequences coding for a given function to the number of all possible sequences of a given length or less (for more details, see here; a toy calculation is sketched after the discussion below). The statistical analysis presented by [Durston et al. 2007] leads to the following considerations:
- Protein domain functionality coded by genetic prescriptions cannot plausibly be a result of spontaneous or law-like factors, which are nothing more than physico-chemical constraints. The translated amino acid sequences code for control and are consequently choice-contingent. The semantic cargo of genetic instructions statistically rules out law-like necessity and chance contingency.
- Biological function is deeply isolated in the phase space, which, in light of the above plausibility considerations, statistically rules out a successful Darwinian or blind search.
- The fitness function in a Darwinian scenario must contain a sufficient amount of functional information to guide the search towards areas with more solutions. Otherwise the search becomes blind and has even less chance of encountering solutions (a genome must be able to code for a minimum of 382 proteins). On the other hand, the only source of functional information that is known to be statistically plausible is intelligence.
Consequently, unguided formation of different genera, classes or phyla appears extremely unlikely given the terrestrial probabilistic resources. Of course, these observations do not rule out evolution as such, but they strictly limit its effects to microevolution (maybe to within genera or species). This strongly suggests that there is no single tree of life, but a forest of phylogenetic trees. It is a future task of experimental bioinformatics to establish the number and shape of each tree in practice. Based on [Durston et al. 2007], we conjecture that significant amounts of functional information are needed not only to generate functional proteins but also to generate genomes of higher taxa from those of lower ones.
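As promised above, here is a toy calculation in the spirit of the functional-sequence measure: functional information as the negative base-2 logarithm of the fraction of sequences performing a function. The four-letter alphabet and the "function" predicate are invented for illustration and are vastly smaller than real protein sequence spaces.

```python
import math
from itertools import product

ALPHABET = "ACDE"   # a toy 4-letter alphabet, not the 20 amino acids
LENGTH = 8

def is_functional(seq: str) -> bool:
    # A made-up functional constraint: the sequence must start with "AC"
    # and contain at least three 'D' residues.
    return seq.startswith("AC") and seq.count("D") >= 3

total = len(ALPHABET) ** LENGTH                      # 65,536 sequences
functional = sum(is_functional("".join(s))
                 for s in product(ALPHABET, repeat=LENGTH))
info_bits = -math.log2(functional / total)
print(f"{functional}/{total} functional -> {info_bits:.1f} bits")
# 694/65536 functional -> ~6.6 bits
```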
- See a presentation on ID by Kirk Durston here, which I highly recommend.
2. Cybernetic problems of the genesis of complex functional systems.
From a different angle, functionality itself points to purposeful conscious design, because nature does not care about functionality [Abel 2011]. It only provides constraints. In contrast, controls/semantics/functionality are in practice always superimposed on top of physicality by a conscious agent/decision maker. For example, in the TCP/IP protocol stack, the semantics of information transfer is determined at the application level; it is then passed down to the physical level, where it is transferred as a sequence of voltage jumps. If you like, nature acts only on the physical level, the remaining levels of the stack being organised by conscious decision makers (for details, see my note here).
So whenever there is cybernetic control (defined as a means of driving a system towards utility of any kind), its very existence is a reliable marker of purposeful design. Matter, being inert to utility, can generate only redundant, low-informational regularities such as dune patterns, crystals, wave interference, etc. (Figures 3-6). Functional systems, however, need control in order to function.
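The layering point can be sketched in code (my own illustration, not from [Abel 2011]): the bit pattern on the "wire" carries no meaning of its own; the meaning exists only in the encode/decode convention shared by sender and receiver.

```python
def application_encode(message: str) -> bytes:
    return message.encode("utf-8")       # agreed protocol: UTF-8 text

def physical_transmit(payload: bytes) -> str:
    # The physical layer sees nothing but a symbol stream; here, '0'/'1'.
    return "".join(f"{byte:08b}" for byte in payload)

def physical_receive(signal: str) -> bytes:
    return bytes(int(signal[i:i + 8], 2) for i in range(0, len(signal), 8))

def application_decode(payload: bytes) -> str:
    return payload.decode("utf-8")       # meaning restored only by convention

signal = physical_transmit(application_encode("start synthesis"))
print(signal[:32], "...")                            # physics-level view: raw bits
print(application_decode(physical_receive(signal)))  # semantic view: the message
```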
Figure 3. A coffee foam pattern.
Summing up, the presence of any of the following reliably points to design:
- Functionality
- Formalism
- Semiosis (use of a material symbol system and a mutually agreed protocol for information exchange between the sender and the receiver)
- Cybernetic control
- Prescriptive information (e.g. recipe-like instructions written using a designated alphabet, language and semantics found in DNA/RNA)
These things are all characteristic of living organisms. We therefore conclude that life is an artefact. Our conclusion is drawn in compliance with the scientific method and is based entirely on observations of reality.
If it were not for a highly ideological bias in today's scientific community, scientists would all agree that life appeared as a result of Creation. Whenever the issue of the origin of life is not the focus of the discussion, evolutionist critics are happy to agree with ID argumentation. ID principles are used in forensics, sociology, medicine, etc. without causing dispute.
- To get an idea of how heated the discussion is about whether ID is science and whether it is legitimate to teach ID in US educational institutions, watch Ben Stein's "Expelled: No Intelligence Allowed" (2008), available on YouTube in full.
Examples of Using ID Logic in Practice
Mass, energy, time and information
Finally, a few words about intelligence. The development of science and technology over the twentieth century, in my opinion, forces us to recognise that information is not reducible to physical interactions that are routinely described in terms of mass/energy. To use a popular example due to Stephen Meyer, the information content of a newspaper article is not reducible to the particular arrangement of typographic ink on paper. True, we do not know what intelligence actually is. However, this is not a science stopper. In the same way, our ignorance about the nature of time or space does not stop us from formulating theories and constructing and using models involving those categories. ID posits that intelligence likewise cannot be reduced to mass/energy (cf. the insightful formulation of the Formalism > Physicality principle by David Abel [Abel 2011]). Nonetheless, the effects of an informational impact on a material system, resulting in specific configurations of matter, can under certain conditions be detected and quantified.
Caveat for Orthodox Christian Readers
With all due respect to the ID masterminds and their scientific work, as an Orthodox Christian priest I should point out that the ID movement is dominated by Protestant thought. So I think the scientific gist of ID should be distinguished from its philosophical framework.
Some of My More Detailed Posts on ID
- ID, pro et contra: http://orthodoxchristian-blogger.blogspot.com/2011/11/intelligent-design-pro-et-contra.html
- Evolution, preadaptation and combinatorial search: http://orthodoxchristian-blogger.blogspot.com/2011/03/open-questions-in-biology-exaptation.html
- On "The First Gene" by David Abel:
- http://orthodoxchristian-blogger.blogspot.com/2012/01/on-david-abels-first-gene.html
- http://orthodoxchristian-blogger.blogspot.com/2012/01/on-hurdles-for-darwinian-macroevolution.html
- Why do we care about evolution and abiogenesis: http://orthodoxchristian-blogger.blogspot.com/2011/12/why-do-we-care-about-evolution-and.html
- ID destrawmannised: http://orthodoxchristian-blogger.blogspot.com/2011/12/gist-of-id-destrawmannised.html
References
- David L. Abel (2011), The First Gene: The Birth of Programming, Messaging and Formal Control, LongView Press Academic: Biolog. Res. Div.: New York, NY.
- David L. Abel (2009), The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
- Douglas Axe (2004), Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds, Journal of Molecular Biology, Volume 341, Issue 5, 27 August 2004, Pages 1295-1315.
- Michael Behe (1996), Darwin's Black Box: The Biochemical Challenge to Evolution.
- Michael Behe (2007), The Edge of Evolution: The Search for the Limits of Darwinism.
- Richard Dawkins (1996), Climbing Mount Improbable.
- William Dembski (2007), No Free Lunch: Why Specified Complexity Cannot be Purchased without Intelligence, Rowman and Littlefield Publishers.
- Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007), Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
- David MacKay (2003), Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
- UncommonDescent.org, ID Foundations.