Monday, 11 June 2012

Biofunctionality: A continent or an archipelago?

Evolutionist: Okay, you have persuaded me that in order for life to kick off, intelligence is required. I accept the ID argumentation regarding the necessary minimum complexity of a protobiotic cell. However, I think that unguided evolution adequately explains the observed biological diversity by natural selection over random variation, given a self-replicating biosystem to start with. There is no epistemic need to invoke the interference of intelligent actors beyond the start of life to account for the biodiversity we see; otherwise we would contradict Occam's rule.

ID proponent: The design-inference argument that applies to the origin of life applies equally to the biological evolution that is supposed to be responsible for the diversity of biological forms descending from a common ancestor. The qualitative and quantitative differences between the genomes of, say, prokaryotes and eukaryotes (e.g. bacteria, plants and humans), or of humans and microorganisms, are so large that biological common ancestry driven by mutations, drift and natural selection is precluded as statistically implausible on the gamut of terrestrial physico-chemical processes, taking into consideration the rates of these processes and a realistic maximum number of them over the entire natural history [Abel 2009].

True, genomic adaptations do occur and can be observed. However, this does not solve the problem of functionality isolation: complex functionality, even in highly adaptable biosystems, is necessarily highly tuned and therefore occupies islands of function in configuration space. The theory of global common descent posits that this problem does not exist and that all existing or extinct bioforms are nodes of a single phylogenetic graph. However, careful consideration of the isolatedness of functionality gives grounds to doubt the tenability of global common descent.

Organisms are complex self-replicating machines which consist of many interrelated subsystems. As machines, biosystems function within certain ranges of their many parameters. The parameter values in a majority of practical cases of complex machines belong to relatively small isolated areas in the configuration space. So even considering the high level of adaptability of living systems, in practice significant functional changes leading to substantial phenotypic differences require deep synchronised reorganisations of all relevant subsystems. Gradualist models similar to [Dawkins 1996] using only random and law-like factors of causation (i.e. random changes in genotype coupled with unguided selection) cannot satisfactorily account for such complex synchronised changes in the general case given the terrestrial probabilistic bounds on physico-chemical interactions. Consequently, the problem of functionality islands vs functionality continent still needs to be addressed.

The main claim of the empirical theory of design inference is that, in practice, a large enough quantity of functionally specified information (FSI, [Abel 2011]) cannot plausibly emerge spontaneously or gradualistically but statistically points strongly to choice-contingent causality (design). Substantially different morphologies (body plans) are explained by substantial differences in the associated FSI.

It is true that, in principle, it is possible to gain some uncertainty reduction automatically as a result of information transfer over a Shannon channel. For example, it can happen as a result of random error correction such as we see in biosystems, but it always happens given an initial, long enough meaningful message, which statistically rules out combinations of chance and law-like necessity.
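
To make the point concrete, here is a minimal, purely illustrative sketch in Python of a triple-repetition code over a noisy channel (the message, flip probability and protocol below are assumptions of this toy example, not anything found in biosystems). Note that the uncertainty reduction at the receiver presupposes both a pre-existing message and a coding protocol agreed in advance by sender and receiver; neither is produced by the channel itself.

    import random

    def encode(bits):
        # Agreed protocol: repeat every bit three times.
        return [b for b in bits for _ in range(3)]

    def noisy_channel(bits, p_flip=0.05):
        # Chance plus law-like necessity: each transmitted bit is flipped with probability p_flip.
        return [b ^ 1 if random.random() < p_flip else b for b in bits]

    def decode(bits):
        # Agreed protocol: majority vote over each received triple.
        return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

    message = [1, 0, 1, 1, 0, 0, 1, 0]    # the pre-existing message
    received = decode(noisy_channel(encode(message)))
    print(message, received, message == received)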

Nevertheless, information transfer results in more than uncertainty reduction. The problem is that information always carries a semantic cargo in a given context, which is something Shannon's theory does not address. Incidentally, Shannon himself did not want to label his theory a theory of information for this very reason [ibid].

The principal problem of the theory of evolution is the complete absence of empirical evidence for the spontaneous (i.e. chance-contingent and/or law-like) generation of semantics in nature. We do not see in practice a long enough meaningful message generated automatically without recourse to intelligence. Nor do we ever see a spontaneous drastic change of semantics given a meaningful initial message. A drastic change of the rules governing the behaviour of a system is only possible given specialised initial conditions (intelligent actors and meta-rules prescribing such a change). How, then, could it be possible by way of unguided variation coupled with law-like necessity?


What we can see is small adaptational variations of already available genetic instructions, as allowed by the existing protocols in biosystems. By the way, the known genetic algorithms are no refutation of our thesis, simply because any such algorithm is (or can be cast as) a formal procedure whereby a function is optimised in a given context. Consequently, any genetic algorithm guides the system towards a goal state and is therefore teleological. Contemporary science knows of no way for formalism to emerge spontaneously out of chaos.
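
To make this concrete, here is a minimal, purely illustrative genetic-algorithm sketch in Python, in the spirit of the well-known "weasel" program; all names and parameter values are assumptions of this toy example. The comments mark what is supplied by the programmer rather than produced by the search: the alphabet (representation), the mutation rate, the population size and, above all, the fitness function, which encodes the goal state.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"    # the goal state, chosen by the programmer
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # the representation, chosen by the programmer

    def fitness(s):
        # The fitness function encodes the goal; it is external, formal information.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        # The mutation rate is a tuned parameter, also supplied from outside.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
    for generation in range(1000):
        best = max(population, key=fitness)
        if best == TARGET:
            break
        # Selection plus variation: the next generation consists of mutated copies of the current best.
        population = [mutate(best) for _ in range(200)]
    print(generation, best)

The point of the sketch is not that such a search fails, but that every ingredient that makes it succeed is specified in advance by an intelligent agent.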

We believe that it is the semantic load that determines the hypothetical passage of a biosystem between functional islands in the configuration space. If this is true, then each such semantic transfer needs purposive intelligent interference. Interestingly, Darwin himself spoke about the descent of all biological forms from one or several common ancestors [Darwin 1859]. If we interpret his words in the sense of a phylogenetic forest (or more precisely - considering lateral genetic transfer - a system of connected graphs with a low percentage of cycles), we can agree with Darwin on condition that each phylogenetic tree determines a separate unique body plan (fig.1). 



Figure 1. A comparative analysis of two hypotheses: common descent vs. common design/implementation. Note that common design/implementation does not exclude local common descent.


The data available today correspond much better to the hypothesis of common design/implementation than to the hypothesis of global common descent:
  • Firstly, what was thought to be genetic "junk" (i.e. non-protein coding DNA, which constitutes a majority of the genome) and seen as evidence of unguided evolution at work, is at least partly no junk at all but rather carries regulatory functions in processing genetic information [Wells 2011]. 
  • Secondly, the universal homology of the genome can now be interpreted as a consequence not only of common ancestry but also of common design/implementation and code reuse in different contexts, in much the same way as we copy&paste or cut&paste pieces of textual information. Probably the best analogy here is the process of software creation, where novelty does not arise from random variations on a per-symbol basis which are then fixed by selection based on "whether it works"; clearly, the majority of such mutations would be deleterious to the functioning of the code as a single integrated whole. On the contrary, novelty in software comes about as a result of well-thought-through, discrete, functional-unit-wise code modifications.
  • Thirdly, common design is indirectly supported by the sheer absence of examples of spontaneous generation of formalism, functionality, semantics or control (i.e. cybernetic characteristics of biosystems) in non-living nature.

Finally, as regards Occam's rule, we need to say the following. In Einstein's apt phrase, one must make everything as simple as possible, but not simpler. A crude over-simplification that ignores crucial aspects of the case in point cannot even be considered correct, let alone more parsimonious than more complex and detailed models. The various theories of (unguided) evolution miss the fact that microadaptations cannot be extrapolated to explain body-plan-level differences. This is because successfully modifying a morphology requires a deep, synchronised reorganisation of a whole number of interdependent functional subsystems of a living organism, where gradualism does not withstand the statistical and information-theoretic challenge. The complexity of such reorganisations grows with the complexity of the biological organisation. On the other hand, the capabilities of functional preadaptations are limited in practice.

That said, even if we leave aside the above complexity and synchronisation challenge, the in-principle untenability of macroevolution (i.e. of theories where evolution is viewed as the exclusive source of genuine novelty) lies in the statistical implausibility of emergence hypotheses which assume a gradual "crystallisation" of cybernetic control, semantics and formalism in biosystems [Abel 2011].

Sources


  1. David L. Abel (2009),  The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
  2. David Abel (2011), The First Gene.
  3. Charles Darwin (1859), On the Origin of Species
  4. Jonathan Wells (2011), The Myth of Junk DNA.
  5. Blog and forum UncommonDescent.com

Tuesday, 15 May 2012

Are Bird Feathers and Avian Respiration Examples of Irreducibly Complex Systems?

[McIntosh 2009] considers birds' plumage and breathing apparatus as examples of irreducibly complex systems in the sense of Behe [Behe 1996]. Recall that an irreducibly complex system is a functional system which depends on a minimum of components each of which contributes to a function: remove any one or more components and the functionality is lost (also see my note here).
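
As a minimal toy model of this definition (Python; the component names are hypothetical placeholders, not a claim about any particular biosystem), the joint function below is available only while every core component is present, so removing any one of them abolishes it:

    CORE_COMPONENTS = {"rotor", "stator", "hook", "filament"}    # hypothetical parts list

    def system_functions(parts):
        # The joint function requires every core component to be present.
        return CORE_COMPONENTS.issubset(parts)

    print(system_functions(CORE_COMPONENTS))                        # True: the complete system works
    for part in CORE_COMPONENTS:
        print(part, system_functions(CORE_COMPONENTS - {part}))     # False for every single removal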

McIntosh argues that organisms, as complex cybernetic nano-machines, could not possibly have been self-organised bottom-up by the Darwinian mechanism, which employs unguided natural selection over mutations and passes through multiple intermediate organisms. In systems such as bird plumage, avian respiration and many others, the removal of any component is in practice either unable to yield a selectable advantage or outright lethal.

According to McIntosh, both the plumage and respiratory system

... are examples of the principle of specified functional complexity, which occurs throughout nature. There is no known recorded example of this developing experimentally where the precursor information or machinery is not already present in embryonic form. Such design features indicate non-evolutionary features being involved.

Fig. 1. Schematic of exaptation. Components that were part of irreducibly complex systems in ancestral forms (functions F1 and F2) are included (exapted) in an irreducibly complex system of a descendant form (function F3) as a result of a functional switch.

I would like to comment on this. First, despite the possibility of functional exaptation (fig. 1), which leads to a functional switch (e.g. according to a popular evolutionary hypothesis, plumage first served only for thermoregulation, not for flight), in practical situations the probability of multiple coordinated functional switches is, as a rule, vanishingly small. Moreover, a gradualist sequence of functional switches appears implausible in practice given the current estimates of the available terrestrial probabilistic resources.

Furthermore, when analysing hypothetical phylogenetic trajectories of biosystems in the configuration space, it is important to realise that the terminal points of these trajectories may belong to different, and sometimes quite distant, basins of attraction, in which case the trajectories themselves cross regions of chaos incompatible with life. Also, the very capability of functional switching is underpinned by the semiotic processes in living organisms. Consequently, exaptation in practice always takes place in a given biological context and is limited by the capabilities of the built-in information-processing systems (see my note here).

In addition, the macroevolutionary scenarios of what might have occurred are too vague in the face of the experimental data [Behe 1996]. At the same time, the fossil evidence speaks more against macroevolution than for it: whereas intermediate forms would have to be truly numerous to lend macroevolution plausible support, the available fossils are too scanty and inconclusive [Dembski & Wells 2007].


Of course, macroevolutionary scenarios as explanations usually enjoy a favourable position (if one does not delve into detail), since it is extremely hard to exclude something as a possibility, which is exactly what the argumentation behind irreducible complexity aims to achieve. Indeed, the burden of proof is always on the party claiming a theoretical or practical impossibility. This is why in Roman law innocence is presumed: it is much easier to prove that a suspect could have committed a crime than that he could not [Gene 2007].


Anyway, as more experimental data are collected, teleology as a principle behind the functioning of the biosphere as a whole, or of individual organisms, is becoming more convincing, while the case for unguided evolution is getting weaker. That said, the question of teleology is of course a demarcation issue and therefore rather philosophical.

Other papers by McIntosh are reviewed here.

References

  1. M. Behe (1996), Darwin's Black Box.
  2. W. Dembski, J. Wells (2007), The Design of Life.
  3. M. Gene(2007), The Design Matrix.
  4. A. C. McIntosh (2009), Evidence of Design in Bird Feathers and Avian Respiration, Intern. Jour. of Design & Nature and Ecodynamics, 4(2), pp. 154-169.
  5. Wikipedia, Demarcation problem.

Friday, 27 April 2012

Examples of using ID in practice

The statistical theory of Intelligent Design posits that it is practically possible to infer purposeful intelligent activity behind changes in the configuration patterns of an arbitrarily complex system, given an analysis of those patterns and an independently provided specification. In other words, based on compelling empirical evidence, ID states that under certain conditions inference to intelligent agency is the best explanation of certain traits.


Design inference is possible given functionally complex patterns of matter that comply with an independently provided specification. However, when intelligent actors generate simple patterns, it is not possible to infer to design using this theory without additional information.


We use the basic idea of Intelligent Design, i.e. the practical possibility of design inference, in many areas of our daily life. In Fig.1 we present an incomplete list of examples of this.

Fig.1. Examples of using intelligence involvement detection in practice.

Saturday, 14 April 2012

A Quote from Darwin

If I saw an angel come down to teach us good, and I was convinced from others seeing him that I was not mad, I should believe in design. If I could be convinced thoroughly that life and mind was in an unknown way a function of other imponderable force, I should be convinced. If man was made of brass or iron and no way connected with any other organism which had ever lived, I should perhaps be convinced. But this is childish writing (citation based on "The Design Matrix" by Mike Gene, emphasis added).
This is a quote from a letter by Charles Darwin to Asa Gray, a professor of botany at Harvard. Darwin wrote it in response to Gray's question about what it would take to convince him that life was designed. I think that Darwin's position here is profoundly unscientific. Yes, science encourages skepticism, but doubt is a double-edged sword. A scientist should always remain, to a reasonable extent, skeptical of his own views. A scientist should take pains to analyse all available data fairly and be prepared to follow the evidence wherever it leads.

Now that mathematics has established the existence of limits to reason, such determination against the mysterious and the non-formalisable should itself, dialectically, raise scientific doubts. On the other hand, radical bias is always dangerous, as it can make us blind to something essential:

If they hear not Moses and the prophets, neither will they be persuaded, though one rose from the dead (Luke 16:31).

References

1. M. Gene, The Design Matrix, 2007 (see also the eponymous blog).
2. M. Behe & J. Wells, Then and Only Then.

Wednesday, 4 April 2012

Towards Defining Intelligence

Over recent months, during Intelligent Design-related discussions at UncommonDescent.com, I have heard many naturalistically minded colleagues say that we, ID proponents, cannot adequately define intelligence.

I think that, as far as cybernetics is concerned, intelligence can be defined as anything that is able to impart functionality to systems. This phenomenological definition is analogous to the definition of gravity in Newtonian mechanics as a natural phenomenon by which physical bodies attract one another with a force proportional to their masses. Intelligence is the only means of creating functionally complex configurations/patterns of matter that we know of in practice. Figuratively speaking, intelligent agency leaves a trace of specified complexity behind, similarly to ants communicating by a pheromone trail.

Defining intelligence in this way, we also get away from the infinite regress and vicious circularity that come as a package deal with naturalistic emergentism, i.e. with the contemporary self-organisation theories that have little to do with reality or practicality.
  • Emergentism postulates the hypothetical spontaneous emergence of particular properties of systems under certain conditions. For example, self-organisation assumes an unobserved spontaneous generation of formal functionality and cybernetic control in complex systems.
Another important issue here is the ability to describe scientifically the concrete ways in which intelligence can act on matter. I believe that the action of intelligence is inherently heuristic. It is quite interesting to note, in defense of this statement, that animals and humans can sometimes solve practical combinatorial problems very efficiently (see e.g. this paper on how efficiently practitioners solve the travelling salesman problem visually).

Wednesday, 28 March 2012

On Evolutionistic Solipsism

Intelligent Design makes a straightforward and natural claim:
  • If an observed phenomenon is amenable to design inference (that is, if it is complex enough and functional), it is very likely that it was actually intelligently designed. In this case, conscious design is abductively the best explanation of its existence. Take for example the car engine. It has a high level of complexity. Its multiple parts are interrelated and tuned to function properly and it is characterised by formal utility as an integrated whole. We correctly assume that the engine is a product of design. Likewise, we observe formal utility in living organisms and infer to design by Intelligence.
Evolutionism castrates scientific thought and ascribes the generation of functional complexity to chance and necessity only. Consequently, it is a kind of solipsism.
  • Solipsism is a radical philosophical view that accepts one's own individual consciousness as the only thing sure to exist. It consequently denies the objective reality of the world [1].
As Richard Dawkins, perhaps the most implacable and aggressive neo-Darwinist of our day, has it, we should think that design is only an apparent cause of life. Evolutionists personify natural selection by comparing it to a watchmaker, sculptor, software engineer, poet, composer and architect [6]. However, they always make the mandatory reservation that in reality no such watchmaker, sculptor or architect exists in person [5]. In their view, life, the most elaborate masterpiece of all the arts, is but a consequence of blind chance and brute force in varying combinations. A detailed plan of a multi-storey building exists without an architect; software exists without a programmer. Isn't that solipsism?

As cyberneticist David Abel notes [4], the ability of mathematics, the archetype of formalism, to express the laws of nature tells us not only that human intelligence can formulate and work with powerful mathematical models, but also that our world, at least in part, admits formalisation. The fact that systematic observation points only to intelligence as the cause of formalism in this world suggests that the very formalisability of our universe is itself a product of Intelligence. In our egocentric solipsism, Abel continues, we cannot acknowledge this simple truth.

I can only add to this wonderful insight my personal observations. Many intellectuals in the West today are under a tremendous influence of Buddhism. It seems to me that what Abel pointed out is in fact one of the reasons why so many scientists are so entrapped. Buddhism essentially sees everything as an illusion and, by virtue of this, it is inherently solipsistic and thus it has a lot in common with evolutionism. 

I think we have to change the status quo in the modern life sciences, where conformity to the philosophical commitments of the majority of scientists compromises the cause and the methodology of science. The roots of this phenomenon lie in the unwillingness of the majority of the scientific community to accept the possibility of interpretations that are not in line with their materialistic way of thinking. This unwillingness, in turn, rests upon evolutionism, which has been silently elevated to the rank of dogma, the only permitted mode of explanation. To get out of this dead end we need to rehabilitate:


  • teleology as a legitimate scientific and philosophical position (cf. teleonomy [2,3,7]);
  • and, in particular, Aristotelian choice contingency/purposeful design as an acceptable causal category.
Failure to do so will continue creating obstacles for unprejudiced thought in life sciences in future. 

References

  1. Wikipedia, Solipsism.
  2. Wikipedia, Teleonomy.
  3. Wikipedia, Теория П.К.Анохина функциональных систем (P.K.Anokhin's theory of functional systems, in Russian).
  4. D. Abel, The First Gene.
  5. R. Dawkins, The Blind Watchmaker. 
  6. W. Dembski, J. Wells, The Design of Life.
  7. Egnorance Blog, Teleonomy and Teleology.

Friday, 23 March 2012

Notes on the Margins

  • Teleology is as scientific as non-teleology:
    • The global teleological assumption that the universe has a purpose is as scientific as the contrary non-teleological assumption. That the former has been deprived of its right to be treated as scientific is wrong as far as philosophy of science is concerned. 
    • The status quo in science where materialism de facto enjoys a monopoly on interpretations of scientific findings is harmful to scientific enquiry in the long run.
  • Defending design/choice contingency as a factor of causality:
    • Without choice contingency in the set of causal factors (together with chance and necessity) it is impossible to develop an unbiased scientific understanding of intelligence, consciousness, life, and of the process of creation/functioning of complex artefacts.
    • The expulsion of choice contingency from the scientific method is explained by the said materialistic interpretational monopoly and violates the objectivity of scientific investigations.
  • "Correlation is not causation" [Dembski & Wells 2007]:
    • To say that genome homology proves common descent is logically wrong. It is just one possible interpretation of available data. Another possible interpretation is common design, although common design and common descent are not mutually exclusive. One can argue in favour or against particular interpretations (theories) but one cannot prove a theory. Common design was a very popular interpretation before Darwin. It is regaining its popularity now as more data becomes available.
  • "Science starts with figures" [Behe 1997], [Depew & Weber 2011]:
    • Neo-Darwinism has no underpinning mathematical core. I hazard a guess that it never will, simply because it has very limited predictive power (retrodicting the notorious missing links in the geological strata appears inconclusive and unconvincing). On the other hand, mathematical formulas reflect the inherent formal laws underlying physical reality, and consequently formulas themselves are a form of prediction.
  • It is only choice contingency that has a genuine creative potential [Abel 2011]:
    • The entirety of scientific and technological experience of humanity suggests that a practically unlimited creativity potential which is usually wrongly attributed to evolution (i.e. to a combination of law-like necessity and chance) is only possible through a purposive choice of:
      • the initial conditions;
      • the architectural design and parameter tuning of the system's functional components;
      • the material symbol system in semiotic information transfer between the components of the system.
    • Law-like necessity and chance are not enough for an algorithm to function. The functioning of algorithms can only be made possible via choice contingency, a special causal factor.
  • Spontaneous generation of novelty per se is not observed [Ewert et al. 2012]:
    • Genetic algorithms require careful parameter tuning before they can work. They work by merely rearranging various parts of a single whole. The intelligent power of GAs should not be overemphasised.
    • Any algorithm (and GAs in particular) is teleological because it has a goal state to reach. Inanimate matter is inert to goals and consequently it could not have possibly generated the genome. 
    • Self-tuning/learning algorithms, as well as any others, are intelligently created. The self-tuning and learning behaviour is but concentrated and formalised expertise of human professionals. 
  • A working algorithm without the conscious programmer is nonsense:
    • The possibility of there being algorithms capable of working with the genome points to intelligent agency behind that algorithm more reliably than it does to chance and/or necessity: in inanimate unconscious matter formalisms do not spontaneously arise. 
    • If it is impossible to have an algorithm without the programmer, it is ridiculous to suggest that error-correcting and noise-reducing code should come about by trial and error (the generation of novelty is attributed to errors during replication in neo-Darwinian models [Dawkins 1996]).
    • There is general consensus that DNA/RNA is in fact code: it has a certain alphabet, syntax and semantics. Anyone who has done at least some programming will agree that new functionality is not added by mutating the existing code chaotically and relying on the customer to take it or leave it as is (as in neo-Darwinian models, where mutations and natural selection acting on the genetic code are believed to lead to novelty). On the contrary, in practice new functionality appears in code as a result of a formal process: formulation of requirements -> proof of concept -> implementation -> testing. A new module or data structure gets added to the existing code at once (not through symbol-by-symbol or line-by-line trial and error). It happens as a result of the conscious work of a team of experts.
      • The claim that novelty is purely a result of the (huge) probabilistic resources available to evolution can no longer be accepted. The probabilistic resources (i.e. the total number of realisable configurations over a given amount of time) are in fact limited, and this can easily be demonstrated. For the step-by-Darwinian-step accumulation of enough genetic information to get life going, the whole terrestrial lifespan falls short by many orders of magnitude, even on very liberal estimates (details here).
      • Hypotheses of non-intelligent extraterrestrial origins of life (e.g. panspermia) in principle cannot resolve the issue of the initial conditions and complexity required for living organisms: contemporary estimates of the age of the universe and of the age of the Earth are of the same order of magnitude (10^17 seconds).
  • Information exchange in semiotic systems (biosystems included) is impossible without first setting up a common [Виолован 1997]:
    • Alphabet;
    • Syntax;
    • Protocol of information coding/decoding including an agreement about semantics between the sender and the receiver.
  • Life comes only from life (the main principle of vitalism):
    • We have absolutely no evidence of the emergence of life from non-life. Such a scenario is better suited for Sci-Fi rather than for science.
    • Abiogenesis was seriously called into doubt as a result of the work of Louis Pasteur, who in 1860-62 demonstrated the impossibility of the spontaneous generation of micro-organisms.
  • The spontaneous generation of intelligence is also Sci-Fi rather than science:
    • The scientific and technological progress of humanity testifies to "the golden rule" that everything comes at a price. We pay a lot to get but a little. So machines will stay machines and will never be able to replace human intelligence entirely.
    • If that is true and the strong AI hypothesis does not hold, then human intelligence did not come about spontaneously either.
  • Non-function does not by itself become function. To claim the opposite would be equivalent to saying it is possible for a man to pull himself by the hair to get out of a swamp:
    • The spontaneous emergence of function out of non-functional components is not observed in nature. Here functionality means orientation to purpose, usefulness or utility of parts of a single whole [Abel 2011].
    • Inanimate nature is inert to purpose or function. It does not care whether things are functional or not [ibid].
      • In case someone wonders, preadaptation is a different story as it involves switching between functions in a given biological context rather than the emergence of function out of non-function.
  • The universe is fine-tuned for intelligent life (the strong anthropic principle, see [Behe et al. 2000]):
    • The multiverse hypothesis is non-scientific by definition since the scientific method assumes direct observation and analysis of things happening in our world. So an indirect hypothetical increase in probabilities of incredibly implausible events in this world as a result of the introduction of parallel universes does not count.
    • The weak anthropic principle attempts to dismiss the need to explain the fine-tuning of our universe by positing that "if man had not been around, there would not have been any observers of fine-tuning". This is not a way out either because correlation is not causation, as we pointed out earlier. The weak anthropic principle fails to show that the two following questions need not be answered:
      • Why is there an unbelievably lucky coincidence of universal constants?  and
      • Why do they need to be so subtly tuned?

Bibliography

  1. D. Abel (2011), The First Gene. 
  2. M. Behe (1997), Darwin's Black Box.
  3. Michael J. Behe, William A. Dembski and Stephen Meyer (2000), Science and Evidence of Design in the Universe, Ignatius Press. 
  4. R. Dawkins (1996), The Selfish Gene.
  5. W. Dembski and J. Wells (2007), The Design of Life.
  6. David J. Depew and Bruce H. Weber (2011), The Fate of Darwinism: Evolution After the Modern Synthesis, Biological Theory, 2011, Volume 6, Number 1, Pages 89-102.
  7. W. Ewert, W. Dembski, R. Marks (2012), Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem, Bio-Complexity Journal (open access).
  8. К. Виолован (1997), Problems of Abiogenesis as a Key to Understanding the Untenability of the Evolutionary Hypothesis, "Shestodnev" educational centre (in Russian).

Monday, 19 March 2012

In Defense of Vitalism

I believe ... in the Holy Spirit, the Lord, the Giver of Life...
from Nicene-Constantinopolitan Creed, AD 325-381.


St Great Martyr and Healer Panteleimon
(Byzantine icon, beginning of the 13th century)

Recent biochemical research demonstrates the implausibility of a spontaneous origin of life [Abel, Axe, Behe]. It is clear that biological systems exhibit irreducible complexity of their functional core [Behe] irrespective of the dispute about its origin. 

Evolutionists usually claim that the irreducible complexity of the functional core of today's biosystems is due not to their being intelligently designed but possibly to an initial spontaneous redundancy of proto-biosystems coupled with the gradualism of natural selection. However, this claim does not withstand a complexity analysis of the functional information necessary to organise and replicate biofunctional structures given the terrestrial probabilistic resources. Indeed, the hypothesis of the spontaneous formation of the irreducibly complex core of a proto-biosystem falls credibly below the terrestrial plausibility threshold for any thinkable physico-chemical interaction driven by chance and necessity alone [Abel]. Even more so do the hypothetical redundantly complex precursors of contemporary irreducibly complex biosystems. Any suggested hypothetical evolutionary path to what we know today to be irreducibly complex involves multiple steps, each of which is associated with an incredibly low probability; liberal estimates of these probabilities are operationally zero [Behe].

On the other hand, contemporary self-organisation theories [Prigogine, Kauffman, Eigen] fail to acknowledge the simple fact that spontaneous organisation from non-function to function has never been observed (to say nothing of its persistence), in contrast to the readily observable spontaneous generation of low-informational regularities in matter.

When biochemical/genetic evidence is analysed objectively, it appears that: 
  • Life was started off via intelligent agency. 
  • Its functioning was engineered to be autonomous and persistent from the start, by making sure the ensemble of its functional parameters was tuned into a small isolated target zone so as to disallow uncontrolled dissipation of energy. This agrees with the objections of Ikeda and Jefferys to the naive hypothesis of the fine-tuning of the universe.
  • Life's built-in evolutionary tolerances allowing it to adapt to varying environments are quite tight in practice. Even if they were not, the spontaneous unguided formation of new taxa would require the gradual accumulation of new functional information which, in turn, would take orders of magnitude more time than what the currently accepted bounds on the age of the universe allow.
This brings us to the conclusion that life is special: it cannot be reduced to chemistry alone but requires the purposive execution of control over its initial parameter settings, the recordation of these settings in genetic instructions, and the processing of those instructions during replication. Inanimate nature, however, being inherently inert to control, is blind to the choice of means for pursuing a goal, a choice that is necessary to produce functional, persistent systems.

Literature


1. David Abel, The First Gene.
2. Douglas Axe, Estimating the prevalence of protein sequences adopting functional enzyme folds.
3. Michael Behe, Darwin's Black Box.
4. Michael Behe, The Edge of Evolution.
5. M. Eigen, P. Schuster, The Hypercycle: A principle of natural self-organization, Springer, Berlin, 1979.
6. I. Prigogine, I. Stengers, Order Out of Chaos.
7. Stuart Kauffman, Origins of Order: Self-Organisation and Selection in Evolution.

Monday, 5 March 2012

ID in a Nutshell

What is Intelligent Design

Intelligent Design (ID) is a theory based on empirical observations. It posits that it is possible to infer to purposeful design of an object post factum if certain conditions are satisfied. These conditions include:
  1. High enough Kolmogorov complexity of the object's description using a given alphabet and a given universal description language, and  
  2. An independently given specification the object complies with (notably, functional specification). 
The main idea of design inference can also be formulated as follows: under the above conditions intelligent interference in creation/modification of observed patterns of matter is abductively the best (simplest and most reliable) explanation of the observation.   
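
Kolmogorov complexity itself is not computable, but compressed length is a standard practical proxy for it. The following sketch (Python; the sample strings and the use of zlib are illustrative assumptions, not part of any ID metric) merely shows what "descriptive complexity" means operationally:

    import os
    import zlib

    def compressed_bits(data):
        # Compressed length in bits: a crude, computable upper-bound proxy for Kolmogorov complexity.
        return 8 * len(zlib.compress(data, 9))

    regular = b"AB" * 500       # a law-like, repetitive pattern: compresses to almost nothing
    noise = os.urandom(1000)    # a chance-contingent pattern: barely compresses at all
    text = (b"When in the Course of human events it becomes necessary for one people "
            b"to dissolve the political bands which have connected them with another ") * 7

    for name, data in (("regular", regular), ("noise", noise), ("text", text)):
        print(name, len(data), compressed_bits(data))

Compression speaks only to condition 1; whether a pattern also matches an independently given (e.g. functional) specification, condition 2, is a separate question that no compressor can answer.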

The high complexity level and specification serve to exclude the possibility of the object being a result of chance and necessity (or their combinations) on the gamut of a given reference system (such as the Earth, the Solar system or the universe) [Dembski 2007]. Objects amenable to design inference are said to bear sufficient amounts of specified complex information (SCI). There are various metrics available to measure this information [UncommonDescent]. 

Note that while it is not possible to infer to design of an object whose description is of low complexity, it does not necessarily mean it has not been designed. In other words, the complexity and specification are sufficient conditions for design inference but not necessary.

Some ID Links and Names

  1. The main ID blog: www.uncommondescent.com 
  2. Evolution News and Views: http://www.evolutionnews.org/ 
  3. Discovery Institute: http://www.discovery.org/ 
  4. Biologic Institute: http://www.biologicinstitute.org/ 
  5. The evolutionary informatics lab: http://evoinfo.org/
  6. Bio-Complexity Journal: http://bio-complexity.org/ojs/index.php/main

Some of the leading ID theorists are William Dembski, Michael Behe, Stephen Meyer, Jonathan Wells, David Abel and Douglas Axe. Michael Denton, Paul Davies, Fred Hoyle, Steve Fuller, David Berlinski and Leslie Orgel can, undoubtedly, be counted as proponents of telic ideas in the large or design in particular. Interested readers are invited to search for their books or publications in refereed journals.

ID and the World of Refereed Publications

Quite often ID is criticised  for not being scientific. The critics say, okay well books are alright, but you have to publish in peer-reviewed media, which involves a good deal of scientific scrutiny. Here is a list of peer-reviewed ID publications: http://www.discovery.org/a/2640.

Where To Start

The UncommonDescent blog has a FAQ page and a series of posts titled ID Foundations, which I recommend as a starting point if you are interested.

The Gist of ID

1. Plausibility problems of abiogenesis and macroevolution.

The essential ID claims are based on statistics. The main argument is that for anything functional you have to set multiple independent system parameters to specific "function-friendly" values (Figure 1). In a majority of cases, the functional values are located in small target zones. Consider a car engine, for instance. It's got a standard range of temperatures, pressures, tolerances in the carburetor, the ignition sequence and timings as well as many other parameters. They all need to be set appropriately so that the engine functions correctly. A haphazard combination of values just won't work in an overwhelming majority of samples. Obviously, the more independent parameters need to be set, the less the probability of a successful haphazard set of values becomes. At some point, it will become operationally zero.
  • Specifications necessary for design inference determine the shape and size of parameter target zones. Clearly, the zones must be sufficiently small for credible empirical inference to conscious design.


Figure 1. The area of functional parameter value combinations for an abstract system that has three parameters: P1, P2 and P3. The functional parameter value ranges are shown in green.
Clearly, the number of parameter combinations is subject to a combinatorial explosion, so statistically, on the gamut of the observed universe, without intelligence there is effectively zero chance of something functionally complex becoming organised spontaneously.
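
A toy calculation under assumed numbers: suppose each of n independent parameters must fall, by chance, into a target zone occupying a fraction f of its admissible range. The probability of a haphazard configuration landing in the functional region is then f^n, which collapses towards zero as n grows.

    def p_functional(n_params, target_fraction):
        # Independent parameters: the probabilities of hitting each target zone multiply.
        return target_fraction ** n_params

    for n in (3, 10, 30, 100):
        # Assumed for illustration: each functional zone covers 1% of its parameter's range.
        print(n, p_functional(n, 0.01))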

In practice, complex systems often exhibit what is known as irreducible complexity, whereby each of a subset of their components is indispensable in the sense of contributing to a joint function [Behe 1994, Behe 2007]. As soon as at least one of the components fails, the original function is compromised/changed. Irreducible complexity relates to the concept of maximal independent sets in mathematics. Some examples include systems utilising certain chemical reactions such as autocatalysis, systems whose functioning is based on resonance, etc. Leaving the discussion about the origins of irreducibly complex biosystems aside, it can be demonstrated that biological systems as they are today have an irreducibly complex functional core [UncommonDescent]. Indeed, very complex structures such as living cells need to make sure they have correctly co-working subsystems of metabolism, replication, reaction to stimuli etc. at the start of life, where Darwinian gradualism does not work!

We can show by means of simple calculations that such an imaginary event as the self-assembly of a proto-cell cannot plausibly have occurred in the entire history of the world [Abel 2009]. To do this, we can determine an upper bound on the number of events such as physico-chemical interactions on the Earth as follows. Knowing the shortest characteristic time R of a chemical reaction (of the order of 10^-13 s), the age A of the Earth (~10^17 s) and the number N of molecules on the Earth (~10^40), we have:


Nmax = A * N / R = 10^17 s * 10^40 / 10^-13 s = 10^70,

a very liberal estimate of the maximum possible number of physico-chemical interactions that can have occurred. The quantity 1/Nmax gives us what is called the universal plausibility metric for the respective reference system (in this case, our planet).
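
The same bound in a few lines of Python, using the order-of-magnitude figures assumed above:

    import math

    A = 1e17     # age of the Earth in seconds (as assumed in the text)
    N = 1e40     # number of molecules on the Earth (order-of-magnitude assumption)
    R = 1e-13    # shortest characteristic time of a chemical reaction, in seconds

    N_max = (A / R) * N    # upper bound on the number of physico-chemical interactions: ~1e70
    UPM = 1.0 / N_max      # universal plausibility metric for the Earth: ~1e-70
    print(f"N_max ~ 1e{math.log10(N_max):.0f}, UPM ~ 1e{math.log10(UPM):.0f}")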

A practical, optimistic information threshold for descriptions in binary form is 500 bits, which allows us to distinguish 2^500 ≈ 3.27 * 10^150 different states/configurations of our complex system. An example of such a system is a protein molecule (Figure 2), which has multiple amino acid residues: from 150 to a few thousand per domain (its structural/functional unit). So the various configurations of the system correspond to the various residue sequences in a protein domain.

Figure 2. DNA-binding domain and a portion of a DNA molecule. Schematic representations of molecular surfaces (top) and of a tertiary structure of the complex (bottom). Source: Wikipedia, Transcription factors (in Russian).


As an illustration, independently from the plausibility threshold calculations, biologists have recently found that a functional sequence of amino acids in a protein occurs once in every 10^77 sequences on average [Axe 2004]. So this is implausible without design on the gamut of terrestrial interactions!


The practical bound of 500 bits (10^150 states) is a liberal threshold. Indeed, using the same chain of reasoning as in the case of terrestrial interactions above, we can establish that on the gamut of the Solar system at most 10^102 quantum Planck states can be realised. So our bound is 48 orders of magnitude more than that, which is liberal enough. However, in terms of human language texts, 500 bits correspond to just 60-odd characters. This demonstrates the implausibility of the spontaneous emergence of meaningful texts longer than that (e.g. the text you are reading now)! Note that by this logic the plausibility of a gradual accumulation of information is also taken into consideration and consequently ruled out.
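
The arithmetic behind these figures, as a short sketch (the 8 bits per character used below is just one conventional choice for plain ASCII text, and the Solar-system exponent is the assumed bound quoted above):

    import math

    bits = 500
    configurations = 2 ** bits    # 2^500, about 3.27e150 distinct binary configurations
    print(f"2^{bits} ~ {configurations:.3e}")

    solar_exponent = 102          # assumed Planck-state bound for the Solar system (10^102)
    print("orders of magnitude of headroom:", int(math.log10(configurations)) - solar_exponent)

    print("ASCII characters at 8 bits each:", bits // 8)    # 62, i.e. '60 odd' characters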

Existing simplistic models of macroevolution such as [MacKay 2003] do not take into consideration the fact that, due to chaos, there may not be a Darwinian selectable path from one taxon to another (see here). Consequently, in practice the rate of information accumulation in biosystems acted upon by mutations, recombination, drift and natural selection can be much lower than in theory. E.g. according to [MacKay 2003], the amount of information passed from parents to children via genetic recombination induced by sexual reproduction is, under some favourable assumptions, of the order of √G, where G is the size of the genome in bits. In my opinion, the emergence of sexual reproduction itself, as well as of many other mechanisms used in biosystems and usually attributed to evolution, cannot be adequately explained by evolutionary phenomena alone.
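
For illustration only, under the assumption of a genome of about 6 x 10^9 bits (roughly 3 x 10^9 base pairs at 2 bits each), the cited scaling gives:

    import math

    G = 6e9                               # assumed genome size in bits
    info_per_generation = math.sqrt(G)    # MacKay's order-of-magnitude estimate
    print(f"sqrt(G) ~ {info_per_generation:.1e} bits per generation")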

[Durston et al. 2007] deals with a very important question about the plausible source of functionally specified information in biosystems. The functionally specified information carried by genetic prescriptions is associated with the functionality of protein molecules. By definition, the amount of functional information is given by the negative binary logarithm of the ratio of the number of amino acid sequences coding for a given function to the number of all possible sequences of a given length (for more details, see here; a numerical sketch follows the list below). The statistical analysis presented by [Durston et al. 2007] leads to the following considerations:
  • Protein domain functionality coded by genetic prescriptions cannot plausibly be a result of spontaneous or law-like factors, which are nothing more than physico-chemical constraints. The translated amino acid sequences code for control and are consequently choice contingent. The semantic cargo of genetic instructions statistically rules out law-like necessity and chance contingency.
  • Biological function is deeply isolated in the phase space, which in light of the above plausibility considerations statistically rules out successful Darwinian search or blind search.
  • The fitness function in a Darwinian scenario must contain a sufficient amount of functional information to guide the search towards areas with more solutions. Otherwise the search becomes blind and has even less chance of encountering solutions (a genome must be able to code for a minimum of 382 proteins). On the other hand, the only source of functional information that is known to be statistically plausible is intelligence.
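
Here is the numerical sketch of the functional-information measure described above, with purely hypothetical counts (estimating the real numbers for specific protein families is precisely what [Durston et al. 2007] set out to do); the 10^77 figure is borrowed from [Axe 2004] as quoted earlier:

    import math

    def functional_information(functional_sequences, total_sequences):
        # I = -log2(M/N): the rarer the functional subset, the more functional information.
        return -math.log2(functional_sequences / total_sequences)

    N = 20 ** 150         # all sequences of a 150-residue domain over 20 amino acids
    M = N // 10 ** 77     # hypothetical functional fraction of 1e-77 (cf. [Axe 2004])
    print(f"{functional_information(M, N):.0f} bits")    # about 256 bits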

Consequently, the unguided formation of different genera, classes or phyla appears extremely unlikely given the terrestrial probabilistic resources. Of course, these observations do not rule out evolution as such, but they strictly limit its effects to microevolution (perhaps to within species or genera). This strongly suggests that there is no single tree of life but a forest of phylogenetic trees. It is a future task of experimental bioinformatics to establish in practice the number of such trees and the shape of each. Based on [Durston et al. 2007], we conjecture that significant amounts of functional information are needed not only to generate functional proteins but also to generate the genomes of higher taxa from those of lower ones.

  • See a presentation on ID by Kirk Durston here, which I highly recommend.

2. Cybernetic problems of the genesis of complex functional systems.

From a different angle, functionality itself points to purposeful conscious design because nature does not care about functionality [Abel 2011]. It only provides constraints. In contrast, controls/semantics/functionality are in practice always superimposed on top of physicality by a conscious agent/decision maker. E.g. in the TCP/IP protocol stack, the semantics of an information transfer is determined at the application level; it is then passed down to the physical level, where it is transferred as a sequence of voltage jumps. If you like, nature acts only on the physical level, the remaining levels of the stack being organised by conscious decision makers (for details, see my note here).
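
A highly simplified sketch of the point (Python; the "layers" here are toy stand-ins, not the real TCP/IP stack): the meaning of the message exists only at the top layer, while the bottom layer sees nothing but physical symbols.

    def application_layer(meaning):
        # Semantics lives here: an agreed convention maps meaning onto a text message.
        return f"ORDER:{meaning}"

    def transport_layer(message):
        # Below the application layer the content is just a byte sequence to be moved around.
        return message.encode("utf-8")

    def physical_layer(payload):
        # The physical level carries only a pattern of 0s and 1s (voltage levels).
        return "".join(f"{byte:08b}" for byte in payload)

    signal = physical_layer(transport_layer(application_layer("2 coffees")))
    print(signal[:32], "...")    # the bit pattern by itself carries no meaning without the upper layers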

So whenever there is cybernetic control (defined as a means to drive a system towards utility of any kind), its existence itself is a reliable marker of purposeful design. Matter being inert to utility can generate only redundant low-informational regularities such as dune patterns, crystals, wave interference, etc. (Figures 3-6). However, functional systems need control to function. 


Figure 3. A coffee foam pattern.
 
Figure 4. Dune patterns.
Figure 5. Crystals of water on the grass. 


Figure 6. Sine wave interference. 


Summing up, the presence of either of the following reliably points to design:
  • Functionality
  • Formalism
  • Semiosis (use of a material symbol system and a mutually agreed protocol for information exchange between the sender and the receiver)
  • Cybernetic control
  • Prescriptive information (e.g. recipe-like instructions written using a designated alphabet, language and semantics found in DNA/RNA)
These things are all characteristic of living organisms. We therefore conclude that life is an artefact. Our conclusion is drawn in compliance with the scientific method and is based entirely on observations of reality.

If it were not for the highly ideological bias in today's scientific community, scientists would all agree that life appeared as a result of Creation. Whenever the issue of the origin of life is not the focus of the discussion, evolutionist critics are happy to agree with ID argumentation. ID principles are used in forensics, sociology, medicine, etc. without causing dispute.
  • To have an idea of how heated the discussion is about whether ID is science and whether it is legitimate to teach ID in US educational institutions, watch Ben Stein's "Expelled: No Intelligence Allowed" (2008), available on YouTube in full.
Examples of Using ID logic in practice

For examples of how ID can be used in practice please see my note here.
Mass, energy, time and information

Finally, I would like to say a few words about intelligence. The development of science and technology over the twentieth century, in my opinion, forces us to recognise that information is not reducible to the physical interactions that are routinely described in terms of mass/energy. To use a popular example due to Stephen Meyer, the information content of a newspaper article is not reducible to the particular arrangement of typographic ink on paper. True, we do not know what intelligence actually is. However, this is not a science stopper. In the same way, our ignorance about the nature of time or space does not stop us from formulating theories and constructing and using models involving those categories. ID posits that intelligence likewise cannot be reduced to mass/energy (cf. the insightful formulation of the Formalism > Physicality principle by David Abel [Abel 2011]). Nonetheless, the effects of an informational impact on a material system resulting in specific configurations of matter can, under certain conditions, be detected and quantified.

Caveat for Orthodox Christian Readers

With all due respect to the ID masterminds and their scientific work, as an Orthodox Christian priest I should point out that the ID movement is dominated by Protestant thought. So I think the scientific gist of ID should be distinguished from its philosophical framework.

Some of My More Detailed Posts on ID


References
    1. David L. Abel (2011), The First Gene: The Birth of Programming, Messaging and Formal Control, LongView Press Academic: Biolog. Res. Div.: New York, NY.
    2. David L. Abel (2009),  The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
    3. Douglas Axe (2004), Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds, Journal of Molecular Biology,Volume 341, Issue 5, 27 August 2004, Pages 1295-1315.
    4. Michael Behe (1994), Darwin's Black Box: The Biochemical Challenge to Evolution.
    5. Michael Behe (2007), The Edge of Evolution: The Search for the Limits of Darwinism.
    6. Richard Dawkins (1996), Climbing Mount Improbable.
    7. William Dembski (2007), No Free Lunch: Why Specified Complexity Cannot be Purchased without Intelligence, Rowman and Littlefield Publishers, 2007.
    8. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007), Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
    9. David MacKay (2003), Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
    10. UncommonDescent.org, ID Foundations.