Wednesday, 28 March 2012

On Evolutionistic Solipsism

Intelligent Design makes a straightforward and natural claim:
  • If an observed phenomenon is amenable to design inference (that is, if it is complex enough and functional), it is very likely that it was actually intelligently designed. In this case, conscious design is abductively the best explanation of its existence. Take for example the car engine. It has a high level of complexity. Its multiple parts are interrelated and tuned to function properly and it is characterised by formal utility as an integrated whole. We correctly assume that the engine is a product of design. Likewise, we observe formal utility in living organisms and infer to design by Intelligence.
Evolutionism castrates scientific thought and ascribes the generation of functional complexity to chance and necessity only. Consequently, it is a kind of solipsism.
  • Solipsism is a radical philosophical view that accepts one's own individual consciousness as the only thing sure to exist. It consequently denies the objective reality of the world [1].
As Richard Dawkins, perhaps one of the most implacable and aggressive neo-Darwinists today, has it, we should think that design is only an apparent cause of life. Evolutionists personify natural selection by comparing it with a watchmaker, sculptor, software engineer, poet, composer and architect [6]. However, they always make a mandatory reservation that in reality no such watchmaker, sculptor or architect exists in person [5]. In their view, life, the most elaborate masterpiece of all arts, is but a consequence of blind chance and brute force in varying combinations. A detailed plan of a multi-level building exists without the architect. Software exists without the programmer. Isn't it solipsism?

As cyberneticist David Abel notes [4], the ability of mathematics, the archetype of formalism, to express the laws of nature tells us not only that human intelligence can formulate and work with powerful mathematical models, but also that our world, at least partially, allows formalisation. The fact that systematic observations point only to intelligence as the cause of formalism in this world suggests that our universe's amenability to formalisation is itself a product of Intelligence. In our egocentric solipsism, Abel continues, we cannot acknowledge this simple truth.

I can only add to this wonderful insight my personal observations. Many intellectuals in the West today are under the tremendous influence of Buddhism. It seems to me that what Abel pointed out is in fact one of the reasons why so many scientists are so entrapped. Buddhism essentially sees everything as an illusion and, by virtue of this, is inherently solipsistic; it therefore has a lot in common with evolutionism.

I think we have to change the status quo in the modern life sciences, where conformity with the philosophical commitments of the majority of scientists compromises both the cause and the methodology of science. The roots of this phenomenon lie in the unwillingness of the majority of the scientific community to accept interpretations that are not in line with their materialistic way of thinking. This unwillingness, in turn, rests upon evolutionism, which has been silently elevated to the rank of dogma, the only permitted mode of explanation. To get out of this dead end we need to rehabilitate:


  • teleology as a legitimate scientific and philosophical position (cf. teleonomy [2,3,7]);
  • and, in particular, Aristotelian choice contingency/purposeful design as an acceptable causal category.
Failure to do so will continue to create obstacles for unprejudiced thought in the life sciences in the future.

References

  1. Wikipedia, Solipsism.
  2. Wikipedia, Teleonomy.
  3. Wikipedia, Теория функциональных систем П. К. Анохина (P. K. Anokhin's theory of functional systems; in Russian).
  4. D. Abel, The First Gene.
  5. R. Dawkins, The Blind Watchmaker. 
  6. W. Dembski, J. Wells, The Design of Life.
  7. Egnorance Blog, Teleonomy and Teleology.

Friday, 23 March 2012

Notes on the Margins

  • Teleology is as scientific as non-teleology:
    • The global teleological assumption that the universe has a purpose is as scientific as the contrary non-teleological assumption. That the former has been deprived of its right to be treated as scientific is wrong as far as philosophy of science is concerned. 
    • The status quo in science where materialism de facto enjoys a monopoly on interpretations of scientific findings is harmful to scientific enquiry in the long run.
  • Defending design/choice contingency as a factor of causality:
    • Without choice contingency in the set of causal factors (together with chance and necessity) it is impossible to develop an unbiased scientific understanding of intelligence, consciousness, life, and of the process of creation/functioning of complex artefacts.
    • The expulsion of choice contingency from the scientific method is explained by the said materialistic interpretational monopoly and violates the objectivity of scientific investigations.
  • "Correlation is not causation" [Dembski & Wells 2007]:
    • To say that genome homology proves common descent is logically wrong. It is just one possible interpretation of available data. Another possible interpretation is common design, although common design and common descent are not mutually exclusive. One can argue in favour or against particular interpretations (theories) but one cannot prove a theory. Common design was a very popular interpretation before Darwin. It is regaining its popularity now as more data becomes available.
  • "Science starts with figures" [Behe 1997], [Depew & Weber 2011]:
    • Neo-Darwinism has no underpinning mathematical core. I hazard a guess that it never will, simply because it has very limited predictive power (retrodicting the notorious missing links in geological strata appears inconclusive and unconvincing). On the other hand, mathematical formulas reflect the inherent formal laws underlying physical reality and, consequently, formulas themselves are a form of prediction.
  • It is only choice contingency that has a genuine creative potential [Abel 2011]:
    • The entirety of humanity's scientific and technological experience suggests that the practically unlimited creative potential usually (and wrongly) attributed to evolution (i.e. to a combination of law-like necessity and chance) is only possible through a purposive choice of:
      • the initial conditions;
      • the architectural design and parameter tuning of the system's functional components;
      • the material symbol system in semiotic information transfer between the components of the system.
    • Law-like necessity and chance are not enough for an algorithm to function. The functioning of algorithms can only be made possible via choice contingency, a special causal factor.
  • Spontaneous generation of novelty per se is not observed [Ewert et al. 2012]:
    • Genetic algorithms require careful parameter tuning before they can work. They work by merely rearranging various parts of a single whole. The intelligent power of GAs should not be overemphasised (a minimal sketch of the designer-supplied ingredients of a GA is given after this list).
    • Any algorithm (and GAs in particular) is teleological because it has a goal state to reach. Inanimate matter is inert to goals and consequently could not possibly have generated the genome.
    • Self-tuning/learning algorithms, as well as any others, are intelligently created. The self-tuning and learning behaviour is but concentrated and formalised expertise of human professionals. 
  • A working algorithm without the conscious programmer is nonsense:
    • The possibility of there being algorithms capable of working with the genome points to intelligent agency behind that algorithm more reliably than it does to chance and/or necessity: in inanimate unconscious matter formalisms do not spontaneously arise. 
    • If it is impossible to have an algorithm without the programmer, it is ridiculous to suggest that error-correcting and noise-reducing code should come about by trial and error (the generation of novelty is attributed to errors during replication in neo-Darwinian models [Dawkins 1996]).
    • There is a general consensus around the fact that DNA/RNA is in fact code: it has a certain alphabet, syntax and semantics. Anyone who has done at least some programming will agree that adding new functionality is not achieved by mutating the existing code chaotically and relying on the customer to take it or leave it as is (similar to neo-Darwinian models where mutations and natural selection acting on genetic code are believed to lead to novelty). On the contrary, in practice new functionality appears in code as a result of the following formal process: formulation of requirements -> proof of concept -> implementation -> testing. A new module or data structure gets added to the existing code at once (not through symbol-by-symbol or line-by-line trial and error). It happens as a result of the conscious work of a team of experts.
      • The claim that novelty is a result purely of the (huge) probabilistic resources available to evolution can no longer be accepted. The probabilistic resources (i.e. the total number of realisable configurations over a given amount of time) are in fact limited, and this can easily be demonstrated. For the step-by-Darwinian-step accumulation of genetic information sufficient to get life going, the whole terrestrial lifespan falls short by many orders of magnitude, even using very liberal estimates (details here).
      • Hypotheses of non-intelligent extraterrestrial origins of life (e.g. panspermia) in principle cannot resolve the issue of the initial conditions and complexity required for living organisms: contemporary estimates of the age of the universe and of the age of the Earth are of the same order of magnitude (10^17 seconds).
  • Information exchange in semiotic systems (biosystems included) is impossible without preliminarily setting a common [Виолован 1997]:
    • Alphabet;
    • Syntax;
    • Protocol of information coding/decoding including an agreement about semantics between the sender and the receiver.
  • Life comes only from life (the main principle of vitalism):
    • We have absolutely no evidence of the emergence of life from non-life. Such a scenario is better suited to science fiction than to science.
    • Abiogenesis was seriously called into doubt as a result of the work of Louis Pasteur, who in 1860-62 demonstrated that micro-organisms do not arise spontaneously in sterilised media.
  • The spontaneous generation of intelligence is also Sci-Fi rather than science:
    • The scientific and technological progress of humanity testifies to "the golden rule" that everything comes at a price. We pay a lot to get but a little. So machines will stay machines and will never be able to replace human intelligence entirely.
    • If that is true and the strong AI hypothesis does not hold, then human intelligence did not come about spontaneously either.
  • Non-function does not by itself become function. To claim the opposite would be equivalent to saying it is possible for a man to pull himself out of a swamp by his own hair:
    • The spontaneous emergence of function out of non-functional components is not observed in nature. Here functionality means orientation to purpose, usefulness or utility of parts of a single whole [Abel 2011].
    • Inanimate nature is inert to purpose or function. It does not care whether things are functional or not [ibid].
      • In case someone wonders, preadaptation is a different story as it involves switching between functions in a given biological context rather than the emergence of function out of non-function.
  • The universe is fine-tuned for intelligent life (the strong anthropic principle, see [Behe et al. 2000]):
    • The multiverse hypothesis is non-scientific by definition since the scientific method assumes direct observation and analysis of things happening in our world. So an indirect hypothetical increase in probabilities of incredibly implausible events in this world as a result of the introduction of parallel universes does not count.
    • The weak anthropic principle attempts to dismiss the need to explain the fine-tuning of our universe by positing that "if man had not been around, there would not have been any observers of fine-tuning". This is not a way out either because correlation is not causation, as we pointed out earlier. The weak anthropic principle fails to show that the two following questions need not be answered:
      • Why is there an unbelievably lucky coincidence of universal constants?  and
      • Why do they need to be so subtly tuned?
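As promised in the note on genetic algorithms above, here is a minimal GA sketch (my own illustration, not taken from Ewert et al.) that makes explicit the ingredients the programmer supplies before the algorithm can do anything: the goal (fitness function), the encoding, and the tuning parameters.

```python
import random

# Every ingredient below is supplied by the programmer, not generated by the algorithm.
TARGET = 42                      # the goal state the GA is asked to reach
POPULATION_SIZE = 50             # tuning parameter
MUTATION_SPAN = 3                # tuning parameter: maximum mutation step
GENERATIONS = 200                # tuning parameter

def fitness(x: int) -> int:
    """Programmer-supplied measure of how close a candidate is to the goal."""
    return -abs(x - TARGET)

def mutate(x: int) -> int:
    return x + random.randint(-MUTATION_SPAN, MUTATION_SPAN)

def run_ga() -> int:
    population = [random.randint(0, 1000) for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)       # selection by the supplied goal
        survivors = population[: POPULATION_SIZE // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + offspring               # elitism keeps the best so far
    return max(population, key=fitness)

print(run_ga())   # usually ends at or very near 42, the goal the programmer specified
```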

Bibliography

  1. D. Abel (2011), The First Gene. 
  2. M. Behe (1997), Darwin's Black Box.
  3. Michael J. Behe, William A. Dembski and Stephen Meyer (2000), Science and Evidence of Design in the Universe, Ignatius Press. 
  4. R. Dawkins (1996), The Selfish Gene.
  5. W. Dembski and J. Wells (2007), The Design of Life.
  6. David J. Depew and Bruce H. Weber (2011), The Fate of Darwinism: Evolution After the Modern Synthesis, Biological Theory, 2011, Volume 6, Number 1, Pages 89-102.
  7. W. Ewert, W. Dembski, R. Marks (2012), Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem, Bio-Complexity Journal (open access).
  8. К. Виолован (1997), Проблемы абиогенеза как ключ к пониманию несостоятельности эволюционной гипотезы (Problems of abiogenesis as a key to understanding the untenability of the evolutionary hypothesis; in Russian). Просветительский центр «Шестоднев» (Shestodnev educational centre).

Monday, 19 March 2012

In Defense of Vitalism

I believe ... in the Holy Spirit, the Lord, the Giver of Life...
from Nicene-Constantinopolitan Creed, AD 325-381.


St Great Martyr and Healer Panteleimon
(Byzantine icon, beginning of the 13th century)

Recent biochemical research demonstrates the implausibility of a spontaneous origin of life [Abel, Axe, Behe]. It is clear that biological systems exhibit irreducible complexity of their functional core [Behe] irrespective of the dispute about its origin. 

Evolutionists usually claim that the irreducible complexity of the functional core of the biosystems available today is due not to their being intelligently designed, but possibly to an initial spontaneous redundancy of proto-biosystems coupled with the gradualism of natural selection. However, this claim does not stand up to a complexity analysis of the functional information necessary to organise and replicate biofunctional structures given the terrestrial probabilistic resources. Indeed, the hypothesis of the spontaneous formation of the irreducibly complex core of a proto-biosystem falls below the terrestrial plausibility threshold for any thinkable physico-chemical interaction driven by chance and necessity only [Abel]. Even more so do hypothetical redundantly complex precursors of contemporary irreducibly complex biosystems. Any suggested hypothetical evolutionary path to what we know today to be irreducibly complex involves multiple steps, each of which is associated with an incredibly low probability. Liberal estimates of these probabilities are operationally zero [Behe].
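To make the last point concrete (my own arithmetic, with purely illustrative numbers): if a hypothesised path consists of k independent steps, each with probability at most p, the probability of the whole path is at most

$$P_{\text{path}} \le p^{k}, \qquad \text{e.g. } p = 10^{-20},\ k = 10 \;\Rightarrow\; P_{\text{path}} \le 10^{-200},$$

which is far below the terrestrial plausibility bound of roughly 10^-70 discussed in the post "ID in a Nutshell" below.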

On the other hand, contemporary self-organisation theories [Prigogine, Kauffman, Eigen] fail to acknowledge the simple fact that organisation from non-function to function has never been observed to emerge spontaneously, to say nothing of persisting, in contrast to the readily observable spontaneous generation of low-informational regularity in matter.

When biochemical/genetic evidence is analysed objectively, it appears that: 
  • Life was started off via intelligent agency. 
  • Its functioning was engineered to be autonomous and persistent from the start by making sure the ensemble of its functional parameters was "tuned away" to a small isolated target zone to disallow uncontrolled dissipation of energy. This agrees with the objections of Ikeda and Jefferys to the naive hypothesis of the fine-tuning of the universe.
  • Life's built-in evolutionary tolerances allowing it to adapt to varying environments are quite tight in practice. Even if they were not, the spontaneous unguided formation of new taxa would require the gradual accumulation of new functional information which, in turn, would take orders of magnitude more time than what the currently accepted bounds on the age of the universe allow.
This brings us to the conclusion that life is special: it cannot be reduced to chemistry alone but requires the purposive execution of control over its initial parameter settings, the recordation of these settings in genetic instructions, and the processing of those instructions during replication. However, inanimate nature, being inherently inert to control, is blind to the choice of means to pursue a goal, which is necessary to produce functional persistent systems.

Literature


1. David Abel, The First Gene.
2. Douglas Axe, Estimating the prevalence of protein sequences adopting functional enzyme folds.
3. Michael Behe, Darwin's Black Box.
4. Michael Behe, The Edge of Evolution.
5. M. Eigen, P. Schuster, The Hypercycle: A principle of natural self-organization, Springer, Berlin, 1979.
6. I. Prigogine, I. Stengers, Order out of Chaos.
7. Stuart Kauffman, Origins of Order: Self-Organisation and Selection in Evolution.

Monday, 5 March 2012

ID in a Nutshell

What is Intelligent Design

Intelligent Design (ID) is a theory based on empirical observations. It posits that it is possible to infer to purposeful design of an object post factum if certain conditions are satisfied. These conditions include:
  1. High enough Kolmogorov complexity of the object's description using a given alphabet and a given universal description language, and  
  2. An independently given specification the object complies with (notably, functional specification). 
The main idea of design inference can also be formulated as follows: under the above conditions intelligent intervention in the creation/modification of observed patterns of matter is abductively the best (simplest and most reliable) explanation of the observation.

The high complexity level and specification serve to exclude the possibility of the object being a result of chance and necessity (or their combinations) on the gamut of a given reference system (such as the Earth, the Solar system or the universe) [Dembski 2007]. Objects amenable to design inference are said to bear sufficient amounts of specified complex information (SCI). There are various metrics available to measure this information [UncommonDescent]. 
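To make the inference procedure concrete, here is a minimal toy sketch (my own illustration, not an implementation of any particular published metric), assuming the 500-bit bound discussed later in this post and a chance-hypothesis probability supplied by the user:

```python
from math import log2

UNIVERSAL_BOUND_BITS = 500  # the liberal threshold discussed later in this post

def specified_information_bits(p_chance: float) -> float:
    """Information (in bits) associated with an event of chance-probability p_chance."""
    return -log2(p_chance)

def design_inferred(p_chance: float, probabilistic_resources: float) -> bool:
    """Toy check: the specified event must stay beyond the bound even after
    allowing for every opportunity the reference system provides."""
    p_overall = min(1.0, p_chance * probabilistic_resources)
    return specified_information_bits(p_overall) > UNIVERSAL_BOUND_BITS

# A pattern with chance probability 10^-230, given ~10^70 terrestrial interactions:
print(design_inferred(1e-230, 1e70))   # True: ~531 bits, beyond the 500-bit bound
```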

Note that while it is not possible to infer to design of an object whose description is of low complexity, it does not necessarily mean it has not been designed. In other words, the complexity and specification are sufficient conditions for design inference but not necessary.

Some ID Links and Names

  1. The main ID blog: www.uncommondescent.com 
  2. Evolution News and Views: http://www.evolutionnews.org/ 
  3. Discovery Institute: http://www.discovery.org/ 
  4. Biologic Institute: http://www.biologicinstitute.org/ 
  5. The evolutionary informatics lab: http://evoinfo.org/
  6. Bio-Complexity Journal: http://bio-complexity.org/ojs/index.php/main

Some of the leading ID theorists are William Dembski, Michael Behe, Stephen Meyer, Jonathan Wells, David Abel and Douglas Axe. Michael Denton, Paul Davies, Fred Hoyle, Steve Fuller, David Berlinski and Leslie Orgel can, undoubtedly, be counted as proponents of telic ideas in the large or design in particular. Interested readers are invited to search for their books or publications in refereed journals.

ID and the World of Refereed Publications

Quite often ID is criticised for not being scientific. The critics say: okay, books are all right, but you have to publish in peer-reviewed media, which involves a good deal of scientific scrutiny. Here is a list of peer-reviewed ID publications: http://www.discovery.org/a/2640.

Where To Start

The UncommonDescent blog has a FAQ page and a series of posts titled ID Foundations, which I recommend as a starting point if you are interested.

The Gist of ID

1. Plausibility problems of abiogenesis and macroevolution.

The essential ID claims are based on statistics. The main argument is that for anything functional you have to set multiple independent system parameters to specific "function-friendly" values (Figure 1). In the majority of cases, the functional values are located in small target zones. Consider a car engine, for instance. It has a standard range of temperatures, pressures and tolerances in the carburetor, the ignition sequence and timings, as well as many other parameters. They all need to be set appropriately so that the engine functions correctly. A haphazard combination of values just won't work in an overwhelming majority of samples. Obviously, the more independent parameters need to be set, the lower the probability of a successful haphazard set of values becomes. At some point, it becomes operationally zero.
  • Specifications necessary for design inference determine the shape and size of parameter target zones. Clearly, the zones must be sufficiently small for credible empirical inference to conscious design.


Figure 1. The area of functional parameter value combinations for an abstract system that has three parameters: P1, P2 and P3. The functional parameter value ranges are shown in green.
Clearly, the number of parameter combinations is subject to a combinatorial explosion, so, statistically, on the gamut of the observed universe there is effectively zero chance of something functionally complex becoming organised spontaneously without intelligence.
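To illustrate the argument numerically, here is a minimal sketch with purely hypothetical numbers (twenty independent parameters, each with a functional target zone covering 1% of its admissible range):

```python
from math import prod, log2

# Hypothetical numbers for illustration only: 20 independent parameters,
# each with a functional target zone covering 1% of its admissible range.
functional_fractions = [0.01] * 20

p_joint = prod(functional_fractions)   # probability that a haphazard setting works
bits = -log2(p_joint)                  # the equivalent amount of information, in bits

print(f"P(all parameters land in their target zones) = {p_joint:.1e}")  # 1.0e-40
print(f"Equivalent specified information: {bits:.1f} bits")             # ~132.9 bits
```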

In practice, complex systems often exhibit what is known as irreducible complexity, whereby each member of a subset of their components is indispensable in the sense of contributing to a joint function [Behe 1994, Behe 2007]. As soon as at least one of the components fails, the original function is compromised or changed. Irreducible complexity relates to the concept of maximal independent sets in mathematics. Some examples include systems utilising certain chemical reactions such as autocatalysis, systems whose functioning is based on resonance, etc. Leaving the discussion about the origins of irreducibly complex biosystems aside, it can be demonstrated that biological systems as they are today have an irreducibly complex functional core [UncommonDescent]. Indeed, very complex structures such as living cells need correctly co-working subsystems of metabolism, replication, reaction to stimuli, etc. at the start of life, where Darwinian gradualism does not work!

We can show by means of simple calculations that such an imaginary event as the self-assembly of the proto-cell cannot plausibly have occurred in the entire history of the world [Abel 2009]. To do this, we can determine an upper bound on the number of events such as physico-chemical interactions on the Earth as follows. Knowing the shortest chemical reaction time R (of the order of 10^-13 s), the age A of the Earth (~10^17 s) and the number N of molecules on the Earth (~10^40), we have:


$$N_{\max} = \frac{A \times N}{R} = \frac{10^{17}\,\mathrm{s} \times 10^{40}}{10^{-13}\,\mathrm{s}} = 10^{70}.$$

This is a very liberal estimate of the maximum possible number of physico-chemical interactions that could have occurred. 1/Nmax gives us what is called the universal plausibility metric for the respective reference system (in this case, our planet).
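The same bound, written out as a few lines of code (a sketch using exactly the figures quoted above):

```python
A = 1e17     # age of the Earth, seconds
N = 1e40     # number of molecules on the Earth (a very liberal figure)
R = 1e-13    # shortest chemical transition time, seconds

N_max = A * N / R      # upper bound on physico-chemical interactions: 1e70
upm = 1.0 / N_max      # the "universal plausibility metric" for the terrestrial gamut

print(f"N_max = {N_max:.0e}, 1/N_max = {upm:.0e}")   # 1e+70, 1e-70
```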

A practical, optimistic information threshold for descriptions in binary form is 500 bits, which corresponds to 2^500 ≈ 3.27 * 10^150 possible states/configurations of our complex system. An example of such a system is a protein molecule (Figure 2), which has multiple amino acid residues: from about 150 to a few thousand per molecule, organised into domains (its structural/functional units). So various configurations of the system will correspond to various residue sequences in a protein domain.
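As a quick sanity check of the quoted figure (my own arithmetic):

$$2^{500} = \left(2^{10}\right)^{50} \approx \left(1.024 \times 10^{3}\right)^{50} \approx 3.27 \times 10^{150}.$$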

Figure 2. DNA-binding domain and a portion of a DNA molecule. Schematic representations of molecular surfaces (top) and of a tertiary structure of the complex (bottom). Source: Wikipedia, Transcription factors (in Russian).


As an illustration, independently of the plausibility threshold calculations, biologists have recently found that a functional sequence of amino acids in a protein occurs once in every 10^77 sequences on average [Axe 2004]. So this is implausible without design on the gamut of terrestrial interactions!


The practical bound of 500 bits (10^150 states) is a liberal threshold. Indeed, using the same chain of reasoning as in the case of terrestrial interactions above, we can establish that on the gamut of the Solar system at most 10^102 quantum Planck states can be realised. So our bound is 48 orders of magnitude more than that, which is liberal enough. However, in terms of human language texts, 500 bits correspond to just 60-odd characters. This demonstrates the implausibility of the spontaneous emergence of meaningful texts longer than that (e.g. the text you are reading now)! Note that by this logic, the plausibility of gradual accumulation of information is also taken into consideration and consequently ruled out.
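The arithmetic behind the two comparisons in this paragraph, written out explicitly (my own working, using the figures as quoted):

$$\frac{10^{150}}{10^{102}} = 10^{48}, \qquad \frac{500~\text{bits}}{8~\text{bits/character}} \approx 62~\text{characters}.$$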

Existing simplistic models of macroevolution such as [MacKay 2003] do not take into consideration the fact that, due to chaos, there may not be a Darwinian selectable path from one taxon to another (see here). Consequently, in practice the rate of information accumulation in biosystems acted upon by mutations, recombination, drift and natural selection can be much lower than in theory. For example, according to [MacKay 2003], the amount of information passed from parents to children via the genetic recombination induced by sexual reproduction is, under some favourable assumptions, of the order of √G, where G is the size of the genome in bits. In my opinion, the emergence of sexual reproduction itself, as well as of many other mechanisms used in biosystems and usually attributed to evolution, cannot be adequately explained by evolutionary phenomena alone.

[Durston et al. 2007] deals with a very important question about the plausible source of functionally specified information in biosystems. Functionally specified information carried by genetic prescriptions is associated with the functionality of protein molecules. The amount of functionally specified information is determined by the ratio of the number of amino acid sequences coding for a given function to the number of all possible sequences of a given length or less (for more details, see here; a sketch of one common formulation is given after the list below). The statistical analysis presented by [Durston et al. 2007] leads to the following considerations:
  • Protein domain functionality coded by genetic prescriptions cannot plausibly be a result of spontaneous or law-like factors, which are nothing more than physico-chemical constraints. The translated amino acid sequences code for control and consequently are choice contingent. The semantic cargo of genetic instructions statistically rules out law-like necessity and chance contingency.
  • Biological function is deeply isolated in the phase space, which in light of the above plausibility considerations statistically rules out successful Darwinian search or blind search.
  • The fitness function in a Darwinian scenario must contain a sufficient amount of functional information to guide the search towards areas with more solutions. Otherwise the search becomes blind and has even less chance of encountering solutions (a genome must be able to code for a minimum of 382 proteins). On the other hand, the only source of functional information that is known to be statistically plausible is intelligence.
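For concreteness, here is a sketch of one common formulation of functional information found in this literature (not necessarily the exact measure used by Durston et al.): it is the negative logarithm of the functional fraction mentioned above, so the smaller the fraction of functional sequences, the more functional information is implied.

$$I(E_x) = -\log_2 \frac{M(E_x)}{N},$$

where M(E_x) is the number of sequences that perform the function to at least degree E_x and N is the total number of possible sequences of the given length.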

Consequently, the unguided formation of different genera, classes or phyla appears extremely unlikely given the terrestrial probabilistic resources. Of course, these observations do not rule out evolution as such, but they strictly limit its effects to microevolution (maybe to within genera or species). This strongly suggests that there is no single tree of life, but a forest of phylogenetic trees. It is a future task of experimental bioinformatics to establish the number of such trees and the shape of each in practice. Based on [Durston et al. 2007], we conjecture that significant amounts of functional information are needed not only to generate functional proteins but also to generate the genomes of higher taxa from those of lower.

  • See a presentation on ID by Kirk Durston here, which I highly recommend.

2. Cybernetic problems of the genesis of complex functional systems.

From a different angle, functionality itself points to purposeful conscious design because nature does not care about functionality [Abel 2011]. It only provides constraints. In contrast, controls/semantics/functionality in practice are always superimposed on top of physicality by a conscious agent/decision maker. For example, in the TCP/IP stack of protocols the semantics of information transfer is determined at the application level; it is then passed down to the physical level, where it is transferred as a sequence of voltage jumps. If you like, nature acts only on the physical level, the remaining levels of the stack being organised by conscious decision makers (for details, see my note here).
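Here is a toy sketch of that layering (my own illustration; the message names and voltage levels are invented): the semantics lives entirely in the application-level protocol agreed between sender and receiver, while the physical level only carries undifferentiated signal levels.

```python
# Toy illustration of semantics superimposed on physicality.
APPLICATION_PROTOCOL = {"TEMP_ALARM": 1, "ALL_CLEAR": 0}        # agreed semantics

def application_encode(message: str) -> bytes:
    return bytes([APPLICATION_PROTOCOL[message]])               # symbols

def physical_encode(payload: bytes) -> list[float]:
    # "Voltage jumps": nothing at this level knows what the bits mean.
    return [5.0 if (byte >> bit) & 1 else 0.0
            for byte in payload for bit in range(7, -1, -1)]

def physical_decode(signal: list[float]) -> bytes:
    bits = [1 if v > 2.5 else 0 for v in signal]
    return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

def application_decode(payload: bytes) -> str:
    reverse = {v: k for k, v in APPLICATION_PROTOCOL.items()}
    return reverse[payload[0]]                                   # same agreed semantics

signal = physical_encode(application_encode("TEMP_ALARM"))
print(application_decode(physical_decode(signal)))               # -> TEMP_ALARM
```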

So whenever there is cybernetic control (defined as a means to drive a system towards utility of any kind), its very existence is a reliable marker of purposeful design. Matter, being inert to utility, can generate only redundant low-informational regularities such as dune patterns, crystals, wave interference, etc. (Figures 3-6). However, functional systems need control to function.


Figure 3. A coffee foam pattern.
 
Figure 4. Dune patterns.
Figure 5. Crystals of water on the grass. 


Figure 6. Sine wave interference. 


Summing up, the presence of either of the following reliably points to design:
  • Functionality
  • Formalism
  • Semiosis (use of a material symbol system and a mutually agreed protocol for information exchange between the sender and the receiver)
  • Cybernetic control
  • Prescriptive information (e.g. recipe-like instructions written using a designated alphabet, language and semantics found in DNA/RNA)
These things are all characteristic of living organisms. We therefore conclude that life is an artefact. Our conclusion is drawn in compliance with the scientific method and is based entirely on observations of reality.

If it were not for the highly ideological bias in today's scientific community, scientists would all agree that life appeared as a result of Creation. Whenever the issue of the origin of life is not the focus of the discussion, evolutionist critics are happy to agree with ID argumentation. ID principles are used in forensics, sociology, medicine, etc. without causing dispute.
  • To have an idea of how heated the discussion is about whether ID is science and whether it is legitimate to teach ID in US educational institutions, watch Ben Stein's "Expelled: No Intelligence Allowed" (2008), available on YouTube in full.
Examples of Using ID Logic in Practice

For examples of how ID can be used in practice please see my note here.
Mass, Energy, Time and Information

Finally, I would like to say a few words about intelligence. The development of science and technology over the twentieth century, in my opinion, forces us to recognise that information is not reducible to the physical interactions that are routinely described in terms of mass/energy. Using a popular example due to Stephen Meyer, the information content of a newspaper article is not reducible to the particular arrangement of typographic ink on paper. True, we do not know what intelligence actually is. However, this is not a science stopper. In the same way, our ignorance about the nature of time or space does not stop us from formulating theories, constructing and using models involving those categories. ID posits that intelligence likewise cannot be reduced to mass/energy (cf. the insightful formulation of the Formalism > Physicality principle by David Abel [Abel 2011]). Nonetheless, the effects of an informational impact on a material system resulting in specific configurations of matter can, under certain conditions, be detected and quantified.

Caveat for Orthodox Christian Readers

With all due respect to the ID masterminds and their scientific work, as an Orthodox Christian priest I should point out that the ID movement is dominated by Protestant thought. So I think the scientific gist of ID should be distinguished from its philosophical framework.

Some of My More Detailed Posts on ID


References
    1. David L. Abel (2011), The First Gene: The Birth of Programming, Messaging and Formal Control, LongView Press Academic: Biolog. Res. Div.: New York, NY.
    2. David L. Abel (2009),  The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
    3. Douglas Axe (2004), Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds, Journal of Molecular Biology, Volume 341, Issue 5, 27 August 2004, Pages 1295-1315.
    4. Michael Behe (1994), Darwin's Black Box: The Biochemical Challenge to Evolution.
    5. Michael Behe (2007), The Edge of Evolution: The Search for the Limits of Darwinism.
    6. Richard Dawkins (1996), Climbing Mount Improbable.
    7. William Dembski (2007), No Free Lunch: Why Specified Complexity Cannot be Purchased without Intelligence, Rowman and Littlefield Publishers, 2007.
    8. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007), Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
    9. David MacKay (2003), Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
    10. UncommonDescent.org, ID Foundations.