Sunday 11 November 2012

Major issues facing abiogenesis and theories of evolution

In the picture below I present some of the key milestones in the natural history of life which abiogenesis and the current theories of (undirected) evolution must be able to explain in detail. A detailed explanation of any of these milestones (a rigorous, scientific explanation with figures, not the usual evolutionary story-telling) remains a major problem for any hypothesis or theory based entirely on non-intelligent mechanisms, given the statistical implausibility of an origin involving spontaneous and law-like causal factors alone.

Now, ID posits that each such milestone requires a significant amount of functional information, and that the only statistically plausible cause of such information is intelligence, on the gamut of the probabilistic resources available in the entire universe. Having said this, to date the existence of significant amounts of functional information in the context of biosystems has been credibly demonstrated only for functional protein domains, the biochemical building blocks of life [Durston et al. 2007].

Nonetheless, with the rise of ID, evolutionary story-telling is no longer acceptable. Science requires rigour, concrete numbers and plausibility.

Figure 1. Some of the major milestones in the natural history of life. It is hypothesised that each milestone requires the generation of significant amounts of functional information, which for the case of functional protein domains (yellow circle) has successfully been demonstrated empirically in [Durston et al. 2007].

Bibliography


  1. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007) "Measuring the functional sequence complexity of proteins", Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]

Saturday 3 November 2012

Link to Kirk Durston's Presentation on ID

Here is a link to a very interesting presentation on Intelligent Design by Kirk Durston, which I highly recommend as a must-see on this topic.

Tuesday 30 October 2012

An interesting paper on modelling biosystems with cellular automata

It is known that biosystems can be modelled by cellular automata (CA). Cellular automata are regular structures in an N-dimensional space composed of cells of finite size, each of which can be in one of a finite number of states, together with a set of rules specifying state transitions. John von Neumann showed that there exist cellular automata capable of modelling biosystems. These automata are dynamically stable, self-replicating and Turing-equivalent (i.e. they are in effect universal computers).

Interestingly, in the configuration spaces of biosystems there can be isolated functional zones amid chaos. On reflection, this is actually to be expected, because a sufficiently complex functional system is bound to have zones of function-compatible parameter values. The relative size of such a zone tends to shrink as the number of parameters and the criticality of their values for normal operation of the system grow. The same holds for biomachines, even taking into consideration their high adaptability. This is what undermines both classical Darwinism and the modern evolutionary synthesis: if two given taxa belong to two separate parameter zones, there is no selectable Darwinian path from one to the other.

Earlier we discussed the options for a blind Darwinian search in a configuration space. Here is another very interesting paper on the possibility of reaching target zones by Darwinian means (mutations + drift + natural selection) [Bartlett 2008], which summarises the results of mathematical modelling of the evolution of biosystems using cellular automata. Four complexity classes of CA have been identified:
  1. automata that always arrive at a homogenous state no matter what their initial state was;
  2. automata that have a finite sphere of influence for outcomes even if the computation is carried out for an infinite number of steps;
  3. chaotic systems, whose initial states neither have a bounded sphere of influence nor produce predictable results; they do, however, have stable statistical properties;
  4. automata that exhibit a hybrid of the periodic behaviour of Class 2 systems and the chaotic nature of Class 3 systems. More importantly, they are unpredictable both in their exact outcomes and in their statistical properties. These CA are Turing-equivalent.
Since class 4 systems are chaotic to some degree, the mapping between changes in code and changes in the outcome (the genotype-phenotype correlation) is also chaotic. Consequently, in this case a Darwinian selectable path from one taxon to another does not exist.
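To make these classes concrete, below is a minimal, purely illustrative Python sketch of my own (not code from Bartlett's paper) of a one-dimensional elementary cellular automaton: rule 250 produces a simple repetitive pattern (Class 2 behaviour), rule 30 is the textbook chaotic case (Class 3), and rule 110, which is known to be Turing-equivalent, exhibits the hybrid Class 4 behaviour.

```python
# Minimal one-dimensional elementary cellular automaton (Wolfram rule numbering).
# Rule 250 gives a simple repetitive pattern (Class 2), rule 30 is chaotic (Class 3),
# and rule 110 shows the hybrid, Turing-equivalent Class 4 behaviour.
def step(cells, rule):
    """One synchronous update; the rule number encodes all 8 neighbourhood transitions."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=24):
    cells = [0] * width
    cells[width // 2] = 1                      # start from a single live cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for rule in (250, 30, 110):
    print("--- rule", rule, "---")
    run(rule)
```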

An interesting property of class 4 automata is that the degree of chaotic behaviour and the evolvability depend on the implementation language. Phenomena similar to irreducible complexity [Behe 1996] are, as a matter of fact, not invariant with respect to the implementation language. According to Bartlett, the existence of irreducible complexity and similar phenomena points to the existence of higher-order control over genome evolution. Notwithstanding the evolvability of feedback loops, as Bartlett points out, they must initially be loaded into biosystems, since feedback loops never arise spontaneously in reality. In conclusion, we simply point out that the presence of control over evolutionary processes is, in some sense, the demise of evolution per se (never mind the inevitable, oxymoronic phrase "evolutionary process") [Abel 2011], since evolution by definition is undirected.


References
  1. David Abel (2011): The First Gene.
  2. Jonathan Bartlett (2008): Wolfram's Complexity Classes, Relative Evolvability, Irreducible Complexity, and Domain-Specific Languages. Occasional Papers of the BSG 11:5.
  3. Michael Behe (1996): Darwin's Blackbox.

Friday 26 October 2012

What is this functionally specified information after all?

From complexity theory we know that any object can be represented by a string of symbols from a universal alphabet. So we shall concentrate on strings for convenience and clarity. Consider three strings of the same length (72 ASCII characters):
  1. "jjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjb"
  2. "4hjjjqq,dsgqjg8 ii0gakkkdffeyzndkk,.j fnldeeeddzpGCZZaQ12 9nnnsskuu6 s."
  3. "Take 500 g. of flour, 250 ml of water, 2 tsp. of sugar, 1egg. Mix, whip."

String 1 exhibits high periodicity: it is Kolmogorov-simple (highly compressible), monotonic, redundant and highly specific.


String 2 is a collection of random symbols (we assume they are genuinely random for the sake of argument). It is considerably more complex than String 1, is incompressible and lacks specificity.


String 3 is from a food recipe and is therefore a message, unlike the other two strings. Importantly, string 3 carries associated functionally specified information. Failure to recognise this fact leads to a widespread error, namely the claim that strings 1, 2 and 3 are equiprobable in the hypothetical process of spontaneous information generation that might have given rise to biological function. In our example, the function associated with string 3 is the instruction to cook that dish.


Is it possible to rigorously define the functionally specified information associated with a symbolic string? Is it possible to measure it? The answer to both questions is yes. In the literature on the subject [Hazen et al. 2007, Szostak 2003], the information associated with a specific function f is defined as follows:

I(f) = -log2( M(f) / W )
Here M(f) > 0 is the number of strings prescribing function f, and W is the total number of possible strings of a given length (or up to a given length). Clearly, M(f) ≤ W. We can see that reducing specificity (raising the ratio M(f)/W towards 1.0) reduces the amount of specified information, and vice versa.
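As a toy numerical illustration of this definition (my own example, not taken from the cited papers): among binary strings of length 10, suppose the "function" requires the first four symbols to be 1; then M(f) = 2^6 = 64 out of W = 2^10 = 1024 strings qualify, giving 4 bits of functional information.

```python
import math

def functional_information(m_f, w):
    """I(f) = -log2(M(f)/W): bits of functionally specified information
    associated with function f, given M(f) qualifying strings out of W possible."""
    if not (0 < m_f <= w):
        raise ValueError("require 0 < M(f) <= W")
    return -math.log2(m_f / w)

# Toy example: binary strings of length 10 whose first four symbols must be 1.
print(functional_information(2**6, 2**10))   # -> 4.0 bits
# Limiting cases: every string works -> 0 bits; a single string works -> 10 bits.
print(functional_information(2**10, 2**10))  # -> 0.0
print(functional_information(1, 2**10))      # -> 10.0
```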

Note that the specificity of information in a string of symbols constrains the probability distribution of a given symbol appearing at various positions in the string. E.g. if we move the symbols "Take 500 g. of flour" from the beginning to the end of string 3, its functional integrity will be compromised. As expected, Douglas Axe [Axe 2004, Axe 2010a,b] came to the same conclusions after conducting a series of experiments studying the properties of protein domain functionality in bacteria: the functionality of protein domains was found to be deeply isolated in the configuration space, as the number of amino acid residue permutations is vastly greater than the number of permutations that preserve the given function. Neither highly periodic strings of type 1 nor random strings of type 2 are capable of carrying practically significant amounts of specified functional information.


The above is not taken into consideration in various experimental attempts to imitate the hypothetical spontaneous generation of function, for example in the form of meaningful text. I saw somewhere on the internet a discussion about an allegedly successful random generation of the first 24 consecutive symbols of a Shakespearean comedy. Assuming such a sequence was indeed generated, the real question is what the generation algorithm was given for free. The major problem with this kind of numerical experiment is that it assumes a priori the existence of an interpretation protocol uploaded into the system (here, the English language). Besides, the computer code in these experiments (e.g. the well-known Weasel program by R. Dawkins) is often tuned to drive the search towards a goal state in the form of some chosen phrase. More subtle ways exist to sneak information about the solution distribution into the search. It is possible to implicitly drive the search even when the target phrase is not given in advance: the search is still made aware of the likelihood of meeting a solution in different areas of the search space. E.g. given the structure of words and sentences in a language, we have information about the mean frequencies of various letters, which can be used in building random phrases. For more detail on how such so-called active information about the distribution of solutions in the search space can be exploited during a search, see [Ewert et al. 2012].
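To make the objection concrete, here is a minimal sketch of a Weasel-style search (my own reconstruction for illustration, not Dawkins' original code). Note that both the target phrase and the alphabet (the "interpretation protocol") are handed to the algorithm up front, and the fitness function simply measures proximity to that prespecified target.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"      # the goal phrase is given in advance
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "     # the 'protocol' (alphabet) is also given in advance

def fitness(candidate):
    # Number of positions that already match the prespecified target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def weasel(population=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        # Keep the parent among the candidates and select whoever is closest to the target.
        candidates = [parent] + [mutate(parent) for _ in range(population)]
        parent = max(candidates, key=fitness)
        generation += 1
    return generation

print("Reached the prespecified target in", weasel(), "generations")
```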


Earlier we pointed out that strings are useful mathematical representations of reality. Consequently, various configurations of actual material systems also carry certain quantities of functionally specified information. This information can be measured, and such measurements are already being done. [Durston et al. 2007] proposed a method to quantify functionally specified information in proteins based on the reduction in functional uncertainty in functional amino acid sequences compared to a null state in which every sequence is equiprobable (total loss of function).
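A simplified sketch of the idea (my own toy version, not Durston et al.'s actual code or data): take an alignment of sequences that all perform the function, compute the Shannon entropy at each site, and subtract the total from the entropy of the null state in which all 20 residues are equiprobable; the difference, in bits, is the functional sequence complexity.

```python
import math
from collections import Counter

AMINO_ACIDS = 20  # null state: all 20 residues equiprobable at every site

def site_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def functional_sequence_complexity(alignment):
    """Toy FSC-style measure: null entropy minus observed entropy, summed over sites."""
    length = len(alignment[0])
    null_entropy = length * math.log2(AMINO_ACIDS)
    observed = sum(site_entropy([seq[i] for seq in alignment]) for i in range(length))
    return null_entropy - observed

# Tiny made-up 'alignment' of functional sequences (for illustration only).
alignment = ["MKVLA", "MKVLG", "MKILA", "MKVLA"]
print(round(functional_sequence_complexity(alignment), 2), "bits")
```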


An objective analysis of functionally specified information found in nature leads to the following conclusions. Large quantities of functionally specified information are detected only in human artefacts (such as complex information processing systems, natural or computer languages) and in biosystems. Consequently, we are entitled to infer by induction that life's origin is also artificial. This scientific inference can, of course, be tested and potentially refuted experimentally. To disprove it, it is sufficient to empirically demonstrate that:
  • There exists a mechanism utilizing only spontaneous and law-like causal factors, which is capable of generating and applying a protocol of information processing in a multipart system;
  • This mechanism can spontaneously generate and unambiguously interpret long enough instructions in agreement with the protocol; the length of the instructions, in a language compatible with the protocol, should be equivalent to at least 500 bits of information, as carried by 72 ASCII characters of meaningful text (excluding special characters) such as string 3 in the example above. For biosystems, according to Durston, granting the highest possible replication rate over the entire span of natural history (4.5 billion years), the information threshold is 140 bits (roughly 20 ASCII characters); a rough bit count is sketched below.
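As a rough back-of-the-envelope count (my own arithmetic, assuming about 7 bits of information per ASCII character once special characters are excluded):

72 characters × 7 bits/character ≈ 504 bits, i.e. just over the 500-bit threshold;
20 characters × 7 bits/character = 140 bits.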

References

  1. D. Axe (2004) Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds, Journal of Molecular Biology, Volume 341, Issue 5, 27 August 2004, Pages 1295-1315.
  2. D. Axe (2010a) The Case Against a Darwinian Origin of Protein Folds, BIO-Complexity.
  3. D. Axe (2010b) The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations, BIO-Complexity.
  4. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007) "Measuring the functional sequence complexity of proteins", Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
  5. W. Ewert, W. Dembski, R. Marks (2012) Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem, BIO-Complexity.
  6. Hazen R.M., Griffin P.L., Carothers J.M., Szostak J.W. (2007) Functional information and the emergence of biocomplexity, PNAS 104:8574-8581.
  7. Szostak J.W. (2003) Functional information: Molecular messages, Nature 423:689.

Tuesday 23 October 2012

On Two Competing Paradigms

We briefly compare two different systems of axioms: the evolutionist (emergentist) paradigm and the one based on intelligent design (ID) detection. The list below reveals the weaknesses of evolutionism as a philosophical basis for the life sciences. The paradigm based on the theory of intelligent design detection, in contrast, appears to be more powerful and more generic: in it, evolutionary mechanisms play a secondary and quite limited role, which better reflects reality. From the standpoint of the philosophy of science, the fact that the problem of apparent design in biosystems is adequately addressed is a definite strength of ID. Evolutionism, on the contrary, claims that design in biosystems, which is obvious to an unbiased observer, is but an illusion. Since evolutionism does not include choice-contingent causality in the set of legitimate causal factors, its axiomatics cannot withstand the challenges of information theory and cybernetics. In contrast, the inclusion of choice as a causal factor, as is done in ID, allows us to fully appreciate the commonality between complex artificial systems of information processing and living organisms: they share cybernetic control, formalism and semiosis.

  • Apologies for the long list. Unfortunately, tables here at blogspot.co.uk are not parsed correctly by Internet Explorer.

  1. Causal factors used in scientific analysis
    1. Evolutionism: Chance contingency, necessity. The legitimacy of choice is denied.
    2. Intelligent Design Detection: Choice contingency, chance contingency, necessity.
  2. Infinite regress
    1. Yes.
    2. No (on condition that intelligence is assumed to transcend matter).
  3. The problem of initial conditions
    1. Needs to be solved. The probability of randomly hitting a functional target zone in a vast configuration space is below the plausibility threshold on the gamut of terrestrial interactions.
    2. Solved.
  4. Design of biosystems
    1. Claimed an illusion.
    2. Acknowledged in principle.
  5. Teleology and goal setting
    1. Local. The existence of a global goal, foresight or planning is denied at all levels, be it in the universe or in human life.
    2. Local and global. Hierarchies of goals. The goal of an object is thought extraneous to it in relation to other objects.
  6. Forecasting
    1. Problematic and only retrospective.
    2. Possible.
  7. Origin of control, formalism and hierarchy in complex systems
    1. Spontaneous. No empirical data exists to support the claim. Statistical implausibility of spontaneous generation thereof is illustrated by the infinite monkey theorem.
    2. Purposive intelligent generation supported by massive observation (complex artificial systems and bioengineering).
  8. Primacy of constraints vs. rules
    1. Constraints are primal to rules.
    2. Rules are primal to constraints.
  9. Spontaneous generation of regular configurations (self-ordering) in open thermodynamic systems
    1. Accepted.
    2. Accepted.
  10. Possibility of detection of intelligent agency in the origins of biosystems
    1. Denied.
    2. Accepted.
  11. Adaptational and preadaptational mechanisms and the possibility of their change
    1. Accepted. The changes are postulated unguided.
    2. Accepted. The possibility and the limits of change are determined by the uploaded meta-rules.
  12. Image of the biosphere in the configuration space
    1. A continent of functionality. Plasticity of biosystems [Darwin 1859].
    2. An archipelago: islands of function in the ocean of chaos. Adaptational movement is limited to occur within particular islands.
  13. Geometric image
    1. A single phylogenetic tree (classical Darwinism) or a network (considering epigenetic phenomena).
    2. A phylogenetic forest (short trees) or a set of low-cycle graphs (considering epigenetic phenomena).
  14. Common descent
    1. Postulated (sometimes the possibility of several common ancestors is acknowledged). Genome homology is used as proof.
    2. Accepted in principle. Genome homology can alternatively be viewed as a result of common design (by analogy to software design process or to text writing).
  15. The strong anthropic principle
    1. An illusion. The problem of fine-tuning the universal constants for compatibility with life is claimed non-existent.
    2. The necessity to solve the problem is acknowledged. Methodological means to solve it are provided.
  16. The problem of defining and studying the phenomenon of intelligence
    1. Reduced to the basic physico-chemical interactions (physicality).
    2. Addressed in its own right without reduction to physicality. Information semantics cannot be, and is not, reduced to semiotics or physicality. The meaning of a message cannot be reduced to the mere physics of the information channel.
  17. The problems of psychology and human psychics
    1. Induced by the basic physico-chemical factors. Feelings, cognitive and rational activities are ultimately reduced to the four basic physical interactions.
    2. Include the physico-chemical aspect but cannot be exhausted by it. Solved in their own right in view of their irreducibility to physicality. The hierarchy of complexity of reality is acknowledged. The succession non-living matter -> life -> intelligence -> consciousness is analogous to the hierarchy of classes of decision problems (see here).
  18. Free will
    1. An illusion. In essence, it is denied since all is determined by chance and necessity and, ultimately, by the four basic physical interactions.
    2. Acknowledged since legitimate causality includes choice contingency.
  19. Creativity
    1. Practically non-existent, since it is substantially limited by the probabilistic resources available in a given system, while creative causality is represented entirely by chance alone. Selection, as necessity, acts passively and therefore cannot be a factor of creativity.
    2. Possible in the full sense, by means of imparting functionally specified information to systems via programming of the configurable switch bridge [Abel 2011].
  20. Origin of life
    1. Abiogenesis: spontaneous generation of protobiological structures from inorganic compounds.
    2. Purposive generation of initial genomes, tuning the homeostatic state, replication, reaction to stimuli, etc. Uploading the protocols of genetic instruction interpretation as well as meta-rules controlling the limits of adaptive change in those protocols.
  21. Isolatedness and extreme rarity of functional protein domains
    1. The model to explain functional domain generation includes mutations, drift and selection. It does not satisfy the statistical plausibility criterion: the extreme rarity and deep isolation of biofunction in the configuration space in practice rule out incremental solution generation by blind unguided search on the gamut of terrestrial physico-chemical interactions. This problem is inherently related to the problem of initial conditions (an unacceptably low probability of hitting the target zones of parameters compatible with life).
    2. Purposive intelligent generation of functional domains. Its proof of concept is presented by bioengineering.
  22. The role of mutation, recombination, selection and drift
    1. Primary in explaining all observed biodiversity.
    2. Secondary, adaptational within the given higher taxa.
  23. The universal plausibility criterion [Abel 2009]
    1. Not satisfied: the probability of spontaneous generation of genetic instructions is orders of magnitude below the plausibility threshold on the gamut of terrestrial physico-chemical interactions; the time necessary to accumulate the observed amounts of specified functional information is orders of magnitude greater than the lifespan of the universe.
    2. Satisfied. Intelligent choice of initial conditions and rules for the functioning of biosystems in analogy to the known complex artificial systems of information processing.
  24. The presence of functionally specific information in biosystems
    1. Accepted. Nucleotide sequences code bio-function.
    2. Accepted. The amount of functionally specified information associated with a given function is determined by the ratio of the number of sequences coding for that function to the maximum possible number of sequences [Durston et al. 2007].
  25. The source of functional specific information in biosystems
    1. Spontaneous and law-like causal factors: mutation, recombination, drift, selection. Unsupported empirically.
    2. Choice contingent causality. Strongly empirically supported: apart from biosystems, only complex artefacts such as languages and information processing systems exhibit functional specific information.
  26. Commonality of complex artificial systems and biosystems
    1. Denied.
    2. Determined based on analyses of available observations. Both complex artefacts and biosystems are systems of semiotic information processing. The functioning of such systems assumes the a priori uploading of a common alphabet and of rules of syntax and semantics for future information exchange between system components. Large amounts of functionally specified information are observed only in complex artificial systems and in living organisms.
  27. Possibility of spontaneous generation of intelligence
    1. Assumed as the only option. Unwarranted empirically.
    2. Denied based on vast empirical evidence and on the analysis of functionally specified information capable of being spontaneously generated on the gamut of probabilistic resources available in a given system.


References

  1. David L. Abel (2009), The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
  2. David Abel (2011), The First Gene.
  3. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007) "Measuring the functional sequence complexity of proteins", Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
  4. Charles Darwin (1859), On the Origin of Species.
  5. UncommonDescent.com
  6. Wikipedia.

Saturday 20 October 2012

Evolutionist Euthanasia of Science

Albrecht Dürer. St Jerome, 1514.

Evolutionism is unable to adequately explain the apparent design of biological structures, i.e. to explain something that is directly observable and amenable to analysis. It does, of course, offer explanations, but they are overly complex, if at all plausible. It is not polite to mention such things as Occam's razor.

Evolutionist philosophy has done a lot of damage to science and continues to corrode it. The world's science today resembles a transnational corporation with its board of directors, sponsors, clients such as pharmaceutical giants, and grantomania. No one is interested in scrupulous scientific inquiry and the quest for truth. Paper production functions like a mass-production conveyor belt; no one is responsible for the published results (except perhaps in cases of dissent from the hard line of the Evolutionist Partei). Evolutionist ideology rules in the lab.

Modern science—a successor of the European science of the times of the Industrial Revolution—has departed from its Christian roots. Now that the understanding of the overarching purpose in the existence of the world as well as the recognition of the worth and uniqueness of human life have been lost, to find enough ethical grounds for scientific inquiry is impossible, since only religion provides genuine impetus to ethics and morality. This is why we are experiencing the lowering of standards of scientific research and a decline in its prestige in the young generation.

In its solipsism, evolutionism undermines the philosophical grounds of the scientific method, namely the objectivity of observation. If I can see apparent design in a biological system, I as a scientist have the right to assume that my observations correspond to reality. The policy of placing artificial bounds on the set of possible causal factors in the genesis of biological systems, pursued by evolutionism allegedly in good faith and for the sake of science, turns out to be detrimental to it. The basis of any scientific inquiry, namely the connection between a chain of reasoning and objective reality, is no more. Reality is an illusion. Science drifts towards Buddhism, which poisons many minds, especially in the West.

If we value science and its cause, we must reconsider the evolutionist paradigm. The sooner, the better. 

Monday 15 October 2012

Evolution of Biosystems vs. Evolution of Languages

The evolution of biosystems is often likened to the evolution of languages [UncommonDescent]. The two even have common visual representations, i.e. both are thought of as trees (see Figs. 1-2).

Fig.1. Darwinian "tree of life".


Fig.2. Families of Indo-European languages.

In certain cases large numbers of people or animals can behave like non-living matter, and consequently their behaviour can be described by the same laws we use to describe the behaviour of non-living matter. For example, migrations of nations can be modelled accurately enough as viscous fluid flow over a surface [Pickover 2008]. But can we assume that something similar is true of the evolution of languages?


Let us put aside the information-theoretic problems of biological evolution we discussed earlier here and focus on the analogy. I have problems with it (assuming Darwinian evolution). 

The first reason why I am not happy with it is the rules vs. constraints dichotomy highlighted by David Abel in The First Gene [Abel 2011].

By definition, semantics defines a set of rules for the interpretation of declared syntactic constructs. Empirical science testifies to the fact that rules are not the same as constraints. Rules in practice are always defined by intelligent decision makers whereas constraints are represented by physico-chemical interactions necessarily present in any physical system. In the case of linguistics, the decision makers are:
  • Communicating people:
    • Individuals influential in the development of a language (such as Sts Cyril and Methodius for the Slavs, Pushkin for modern Russian, or Chaucer and Shakespeare for English), as well as
    • ordinary language speakers.
  • People communicating with machines
    • This is similar to the above. Perhaps, the only difference is that in the latter case the number of decision makers involved is much smaller, while the process itself is a lot more formal. 
Undoubtedly, a spoken language is influenced by large numbers of people over long periods of time, but that does not mean we can discount the intelligence of their agency.

Redefining the rules of syntax interpretation in linguistics is always done by decision makers in a given context. However, in classical Darwinism or in neo-Darwinism, intelligent agency is ruled out.

The second reason why I cannot agree with the analogy is that random variation, drift and natural selection postulated in evolution theories to be major creativity factors are not capable of reliably generating large quantities of functionally specified information, which natural or computer languages exhibit. Massive empirical evidence suggests that the only reliably identifiable source of functionally specified information is intelligence of decision makers creating formal systems (in particular, languages).

References


  1. David Abel (2011): The First Gene
  2. Clifford A. Pickover (2008): Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press, 2008.
  3. UncommonDescent.com: readers' internet forum.

Thursday 13 September 2012

Amazing Aphid

A colleague showed me today an interesting article on aphids (see here; apologies to non-Russian speakers). We discussed the famous experiments by Shaposhnikov. Shaposhnikov apparently did manage to achieve reproductive isolation, one of the two main features of a biological species, the other being numerous progeny. However, as far as I know, no one else has been able to repeat Shaposhnikov's experiments. Chaikovsky points out [1] that Shaposhnikov had to waste a lot of his time trying to defend himself from his hardline Darwinist colleagues. Some say that he nonetheless continued his research and succeeded in crossing the boundary of the aphid genus, but his later work was never published.


Aphid. Source: Science Photo Library.


Aphids are superorganisms because they actively use endosymbiosis, a special type of symbiotic interaction whereby organisms live inside one another. It is interesting that, as stated in the above article, aphids are apparently phototrophs, i.e. it is believed that they are able to capture solar energy directly via carotenoid synthesis. If this is true, aphids are the only animal species known today capable of synthesising carotenoids; all other animal species get them from food. Plants employ a more complex process of converting solar energy, called photosynthesis, whereby carbon from carbon dioxide in the air is chemically bound and, with the help of light, converted into organic compounds. Carotenoid synthesis in aphids is regulated by only seven genes! It is thought that aphids borrowed them by lateral transfer from a fungus species believed to have once been their parasite. The acquired genes code for the enzymes necessary for carotenoid synthesis.


Aphids Infesting a Lupin. Source: Science Photo Library.
References

1. Yu. Chaikovsky. Nauka o razvitii zhizni: Opyt teorii evolutsii (The science of the development of life: An essay on the theory of evolution). Moscow, KMK, 2006 (in Russian).

Sunday 5 August 2012

Once Again on Uniqueness of Life

In my most recent note on principal differences between chemistry and biochemistry I stressed that biological processes were not limited to their chemical aspects:

Life = Chemistry + Cybernetics

The note concluded that, because life and artificial information processing systems have so much in common, we can infer that life, too, was designed and implemented by intelligence. Upon reading the note, a Russian colleague of mine argued that my conclusion was far-fetched because life is unique and we therefore have no right to draw such conclusions. Here I lay out my arguments a little further to make my case clearer.

In what sense is life unique? Certainly not in its chemistry. Leaving artificial systems out of the equation for the moment, the uniqueness of life lies in cybernetic control on top of physicality, to borrow David Abel's expression. In non-living nature cybernetics is not observed. Inanimate matter does not steer a system through different states towards better utility. Inanimate nature is driven only by laws, not by rules. Cybernetic control, on the other hand, is based on the existence of rules that prescribe which of the potential states of a given system will be chosen next, the choice being made in order to optimise a goal function. An important thing to keep in mind here is that the rules determining which of two states S1 and S2 the system will move to are independent of the physical constraints, which are the same in either state.

However, if we look at artificial information processing systems, we notice their profound commonality with biosystems: each is controlled in order to maintain or drive the system towards a goal state. Non-living nature, by contrast, is inert (indifferent) to rules or goals.


Rules vs Constraints

Let us clarify this by considering games. Regardless of which square a chessman occupies on the board, gravity acts upon it in the same way. On any of the 64 squares the constraints on its motion determined by physical laws are exactly the same. The meaning of moves on the board is determined only by the rules and exists as such only in the context of the game. From the point of view of the law of gravity, the rules have no meaning. In other words, the fact that the knight moves in an L-shape while the bishop moves diagonally cannot be explained by the friction, gravity and reaction force from the board acting on the chessmen.
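The point can be made explicit with a small illustrative sketch (mine, not from any source): the legality of a knight's move is a lookup in a conventional rule table that makes no reference whatsoever to the physics acting on the piece.

```python
# The knight's move as a pure convention: a formal rule, not a physical constraint.
# Nothing in this check refers to gravity, friction or any other law of physics.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def legal_knight_move(src, dst):
    """True if moving from square src to square dst obeys the knight rule."""
    (r1, c1), (r2, c2) = src, dst
    on_board = all(0 <= x <= 7 for x in (r1, c1, r2, c2))
    return on_board and (r2 - r1, c2 - c1) in KNIGHT_OFFSETS

print(legal_knight_move((0, 1), (2, 2)))   # True: the familiar L-shaped move
print(legal_knight_move((0, 1), (3, 3)))   # False: forbidden by the rules of chess alone
```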
 

The situation is exactly the same with living organisms, which are driven by rules written and interpreted as instructions for the ontogenesis of replicating cybernetic systems functioning within physico-chemical constraints (laws). Biosystems are preprogrammed to maintain the goal state of dynamic equilibrium, i.e. homeostasis. The reading/writing of genetic instructions during replication is executed according to specific interpretation rules which are by now well known.

The fact that, in order to interpret material symbols (codons) during transcription/translation of the genetic code, biosystems employ this particular protocol cannot be explained exclusively by the physico-chemical properties of the molecules participating in these processes. Information processing cannot be reduced to the physical aspects of information transfer. Likewise, the meaning of the note you are now reading can be conveyed not only through the internet but also via printed media or orally as a lecture.

It is cybernetics that complex artificial systems and biological systems have in common

As we said earlier, cybernetic control (i.e. rules that steer a system towards states of better utility) is what unites artificial information processing systems and life. This common feature allows us to conclude by induction that life is also an artefact, in full accordance with scientific rigour. Refusal to admit this obvious commonality between artificial systems and organisms is based on a priori philosophical commitments:

"Life has been intelligently designed and implemented?! It is impossible just because it is impossible."

Here is where science per se reaches the demarcation line giving way to world views.


References

D. Abel, Is Life Unique?
D. Abel, The First Gene.

Thursday 12 July 2012

What's the difference between biochemistry and chemistry, or is DNA just a molecule?

My note "On the operational zero and approximate computations" (available in Russian) was published by a popular Russian website here. A lively discussion followed. Some of the comments were, in my opinion, a result of a lack of understanding on the part of the readers of the semiotic nature of genetic information recording and processing in biosystems. One reader even asserted that DNA was just another molecule. Another molecule? Yes, but... Based on the responses from the readers I wrote this. 

So what is the difference between chemistry and biochemistry? Are DNA/RNA just molecules?

Biochemistry differs in its context and semantics, i.e. the contents of the genetic instructions for synthesising the biological structures of the next generation of organisms. Roughly, the preprogrammed unfolding of an ensemble of biochemical processes has two layers:
  • lower layer А of physico-chemical interactions
  • upper layer B of control over the said interactions

This can be likened to the standard OSI model which is a stack/hierarchy of protocols that are used to organise communication over a computer network. Our layer A corresponds to the physical layer of the OSI stack (fig.1), while layer B is the OSI application layer, i.e. the top layer which is available to various end user applications.

Figure 1. Data flow in the standard OSI stack of protocols that are used to organise communication over a computer network


Information processing in the sender's personal computer or other device starts at the application layer and proceeds down to the physical layer, where data is converted to bit streams and finally to voltage transitions in the cable medium. Information processing in the receiver's computer or device is the inverse: voltage jumps are detected and transformed into bit streams and passed further all the way up to the application layer, where the data is presented in human-readable form by an appropriate application.

For example, you type the address of the web page you want to retrieve in a web browser. The browser processes it converting the address into the IP address of the server where this page resides, requests this page from the server and receives a response containing the contents of the page in the form of, say, HTML data (of course, assuming that the address is correct and that the network and the server are up and running). Finally, your browser processes the received information and presents it to you in readable form.

It is at the application layer that the semantics of the future information transfer is defined. The author of the page types in its contents using a text editor application and uploads it to the server. In order for the page to be read successfully, it must be stored at a known address in a format familiar to your application (in this case, a web browser).

It is important to emphasise the following points which are essential to organising a successful information exchange between the sender and the receiver:
  • To correctly interpret information stored on a medium it is necessary to ensure compatibility/identity of the protocol of its recording, on the one hand, and of the protocol of its interpretation, on the other.
  • The presentation of information at the various OSI layers is formal. A protocol is a set of predefined rules loaded on top of physico-chemical interactions (on top of physicality, using David Abel's expression). The physico-chemical interactions only determine the constraints within which the system will operate, not rules. 
  • This is why the rules as such are irreducible to physicality. Likewise the rules of upper layers are irreducible to lower layers. More about this irreducibility can be found here.
  • The existence of rules reliably points to a priori intelligent agency of decision makers simply because physicality alone, being inert to rules of any kind, functions only in the confines determined by constraints and, ultimately, by the laws of nature. 

The process of information transfer in a living cell is similar to that in artificial systems, with differences in the way the recording/interpretation protocol is realised. The physical layer (layer A) is represented by biopolymers, i.e. by DNA/RNA molecules which carry nucleotides of four different types: A, C, G and T in DNA (with U replacing T in RNA).
Figure 2. A series of codons in part of a messenger RNA (mRNA) molecule. Each codon consists of three nucleotides, usually representing a single amino acid. The nucleotides are abbreviated with the letters A, U, G and C. This is mRNA, which uses U (uracil). DNA uses T (thymine) instead. This mRNA molecule will instruct a ribosome to synthesize a protein according to this code. Source - Wikipedia, Transfer of information via genetic code.


Triplets of nucleotides called codons (fig. 2) are interpreted by the cell's reading machinery. In protein-coding segments of DNA/RNA a codon usually corresponds to a particular amino acid residue to be incorporated into a future protein molecule. Reading/interpretation is initiated and terminated by special start/stop codons. The physical and chemical properties of the synthesised proteins are determined by their particular amino acid sequences. These sequences, representing the application layer (layer B), define the biological meaning of the genetic instructions.
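By way of illustration, here is a toy sketch of my own covering only a handful of codons rather than the full table: the codon-to-amino-acid assignment behaves like an arbitrary look-up protocol, and knowing the chemistry of the four bases does not by itself tell you which residue a triplet "means".

```python
# A fragment of the standard genetic code as a plain look-up table (toy example,
# only a few codons shown). The mapping is read as a convention/protocol: the
# chemistry of A, U, G, C alone does not dictate which amino acid a triplet encodes.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon (toy version)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")  # '?' = codon not in our toy table
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("AUGUUUGGCUAA"))   # -> Met-Phe-Gly
```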


We can see that the reading/writing of genetic information is a formal process, an algorithm: a sequence of instructions given via a material symbol system and stored in biopolymers. Modern science knows no example of a spontaneous and/or law-like mechanism for generating formal instructions or for specifying/loading a protocol. On the contrary, based on massive evidence, these are properties exclusively of artificial information processing systems.

The points we presented in the above discussion of the OSI protocol stack are also perfectly valid in the context of a living cell. The presence of an a priori defined protocol (set of rules) for coding/interpretation of genetic instructions in a biosystem is as necessary as the presence of similar protocols in human designed and implemented information processing systems. Consequently, it is incorrect to reduce life to chemistry, if, obviously, we want to have a good understanding of what life is from the point of view of contemporary science. 


Finally, I will point out that it would be wrong to interpret what I wrote here as an attempt to impart some mystical properties to the biopolymer medium as such. Of course, DNA/RNA have chemical properties, but that is not all there is to them. We would throw the baby out with the bathwater if we reduced the complex semiotic process of genetic information coding/interpretation exclusively to its chemical side. Borrowing an analogy from Stephen Meyer, one can of course say that a newspaper page carries nothing but a collection of ink blobs. At the physical level, that is certainly the case. However, it is not all there is to the newspaper article. Similarly, the contents of a conversation cannot be reduced to sound waves travelling from mouth to ear through the air. For the same reason, the functioning of such complex systems as television or radio receivers cannot be explained solely by the existence of the electromagnetic field.

As we said earlier, the semantic cargo of a message in all cases of the functioning of complex artificial information processing systems is a consequence of meaningful actions on the part of decision makers. That is why the detection of semantics itself reliably points to intelligent agency behind the functioning of cybernetic systems.

True, there is no mystery in the chemical properties of DNA/RNA. However, the simple fact that at the heart of life, in replication, lies a process which with scientific rigour, certainty and reliability points to intelligence behind it, and, what's more, to intelligence far greater than that of humans, will lead us to the greatest mystery of all. It will lead us to the One who created life in the beginning, of course, if we don't want to remain willfully blind.


References

  1. David Abel, The First Gene. 
  2. Blog and forum Uncommon Descent

Monday 11 June 2012

Biofunctionality: A continent or an archipelago?

Evolutionist: Okay, you have persuaded me that in order for life to kick off, intelligence is required. I accept the ID argumentation regarding the necessary minimum complexity of a protobiotic cell. However, I think that unguided evolution adequately explains the observed biological diversity by natural selection acting over random variation, given a self-replicating biosystem to start with. There is no epistemic need to invoke the interference of intelligent actors beyond the start of life to account for the biodiversity we see. Otherwise, it would contradict Occam's razor.

ID proponent: The argumentation of design inference relating to the start of life applies equally to the biological evolution that is supposedly responsible for the diversity of biological forms descending from the same ancestor. The qualitative and quantitative differences between the genomes of, say, prokaryotes and eukaryotes (e.g. bacteria, plants and humans) or humans and microorganisms are so large that biological common ancestry via mutations, drift and natural selection is precluded as statistically implausible on the gamut of terrestrial physico-chemical processes, taking into consideration the rates of these processes and a realistic maximum number of them over the entire natural history [Abel 2009].

True, genomic adaptations do occur and can be observed. However, this does not solve the problem of functionality isolation: complex functionality, even in highly adaptable biosystems, is necessarily highly tuned and so represents islands of function in the configuration space. The theory of global common descent posits that this problem does not exist and that all existing or extinct bioforms are nodes of a single phylogenetic graph. However, careful consideration of functionality isolation gives grounds for doubting the tenability of global common descent.

Organisms are complex self-replicating machines which consist of many interrelated subsystems. As machines, biosystems function within certain ranges of their many parameters. The parameter values in a majority of practical cases of complex machines belong to relatively small isolated areas in the configuration space. So even considering the high level of adaptability of living systems, in practice significant functional changes leading to substantial phenotypic differences require deep synchronised reorganisations of all relevant subsystems. Gradualist models similar to [Dawkins 1996] using only random and law-like factors of causation (i.e. random changes in genotype coupled with unguided selection) cannot satisfactorily account for such complex synchronised changes in the general case given the terrestrial probabilistic bounds on physico-chemical interactions. Consequently, the problem of functionality islands vs functionality continent still needs to be addressed.

The main claim of the empirical theory of design inference is that, in practice, a large enough quantity of functionally specified information (FSI, [Abel 2011]) cannot plausibly emerge spontaneously or gradualistically but instead points statistically and strongly to choice-contingent causality (design). Substantially different morphologies (body plans) are explained by substantial differences in the associated FSI.

It is true that, in principle, some uncertainty reduction can be gained automatically as a result of information transfer over a Shannon channel. E.g. it can happen as a result of random error correction such as we see in biosystems, but it always presupposes an initial, long enough meaningful message, which statistically rules out combinations of chance and law-like necessity.

Nevertheless, information transfer results in more than uncertainty reduction. The problem is that information always carries a semantic cargo in a given context, which is something Shannon's theory does not address. Incidentally, Shannon himself did not want to label his theory a theory of information for this very reason [ibid].

The principal problem of the theory of evolution is the absolute absence of empirical evidence of spontaneous (i.e. chance contingent and/or necessary) generation of semantics in nature. We do not see in practice a long enough meaningful message automatically generated without recourse to intelligence. Nor do we ever see a spontaneous drastic change of semantics given a meaningful initial message. A drastic change of rules governing the behaviour of a system is only possible given specialised initial conditions (intelligent actors and meta-rules prescribing such a change). How then is it possible by way of unguided variations coupled with law-like necessity?


What we can see are small adaptational variations of already available genetic instructions, as allowed by the existing protocols in biosystems. Incidentally, the known genetic algorithms are no refutation of our thesis, simply because any such algorithm is (or can be cast as) a formal procedure whereby a function is optimised in a given context. Consequently, any genetic algorithm guides the system towards a goal state and is therefore teleological. Contemporary science knows no way for formalism to emerge spontaneously out of chaos.
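For instance, here is a bare-bones genetic-algorithm skeleton (an illustrative sketch of mine, not any published code): the fitness function that defines "better" is chosen by the programmer and passed in from outside the loop, which is precisely the formal, goal-directed element referred to above.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=50, generations=200, seed=1):
    """Bare-bones GA: the objective ('fitness') is supplied from outside the loop."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection towards the supplied goal
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]              # crossover
            i = random.randrange(length)
            child[i] ^= 1                          # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The 'goal' lives entirely in this externally chosen function (here: maximise the ones).
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```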

We believe that it is the semantic load that determines the hypothetical passage of a biosystem between functional islands in the configuration space. If this is true, then each such semantic transfer needs purposive intelligent interference. Interestingly, Darwin himself spoke about the descent of all biological forms from one or several common ancestors [Darwin 1859]. If we interpret his words in the sense of a phylogenetic forest (or more precisely - considering lateral genetic transfer - a system of connected graphs with a low percentage of cycles), we can agree with Darwin on condition that each phylogenetic tree determines a separate unique body plan (fig.1). 



Figure 1. A comparative analysis of two hypotheses: common descent vs. common design/implementation. Note that common design/implementation does not exclude local common descent (click to enlarge).


Data available today corresponds much better to the hypothesis of common design/implementation than to the hypothesis of global common descent:
  • Firstly, what was thought to be genetic "junk" (i.e. non-protein coding DNA, which constitutes a majority of the genome) and seen as evidence of unguided evolution at work, is at least partly no junk at all but rather carries regulatory functions in processing genetic information [Wells 2011]. 
  • Secondly, the universal homology of the genome can now be interpreted as a consequence not only of common ancestry but also of common design/implementation and code reuse in different contexts, in much the same way as we copy&paste or cut&paste pieces of textual information. Probably the best analogy in this sense is the process of software creation, whereby novelty does not arise as a result of random per-symbol variations which are then fixed by selection based on "whether it works"; clearly, a majority of such mutations would be deleterious to the functioning of the code as a single integrated whole. On the contrary, novelty in software comes about as a result of well-thought-through, discrete, functional-unit-wise code modifications.
  • Thirdly, common design is indirectly supported by the sheer absence of examples of spontaneous generation of formalism, functionality, semantics or control (i.e. cybernetic characteristics of biosystems) in non-living nature.

Finally, as regards Occam's razor, we need to say the following. According to Einstein's apt phrase, one must make everything as simple as possible, but not simpler. A crude over-simplification which ignores crucial aspects of the case in point can never be considered even approximately correct, to say nothing of being more parsimonious than more complex and detailed models. The various theories of (unguided) evolution miss the fact that microadaptations cannot be extrapolated to explain body-plan-level differences. This is because successfully modifying a morphology requires a deep, synchronised reorganisation of a whole number of interdependent functional subsystems of a living organism, where gradualism cannot withstand the statistical and information-theoretic challenge. The complexity of such reorganisations grows with the complexity of the biological organisation. On the other hand, the capabilities of functional preadaptations are limited in practice.

That said, even if we do not take into consideration the above complexity and synchronisation challenge, the untenability in principle of macroevolution (i.e. of theories in which evolution is viewed as the exclusive source of genuine novelty) lies in the statistical implausibility of emergence hypotheses which assume the gradual "crystallisation" of cybernetic control, semantics and formalism in biosystems [Abel 2011].

Sources


  1. David L. Abel (2009),  The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
  2. David Abel (2011), The First Gene.
  3. Charles Darwin (1859), On the Origin of Species
  4. Jonathan Wells (2011), The Myth of Junk DNA.
  5. Blog and forum UncommonDescent.com

Tuesday 15 May 2012

Are Bird Feathers and Avian Respiration Examples of Irreducibly Complex Systems?

[McIntosh 2009] considers birds' plumage and breathing apparatus as examples of irreducibly complex systems in the sense of Behe [Behe 1996]. Recall that an irreducibly complex system is a functional system which depends on a minimum of components each of which contributes to a function: remove any one or more components and the functionality is lost (also see my note here).

McIntosh argues that organisms, as complex cybernetic nano-machines, could not possibly have been self-organised bottom-up by the Darwinian mechanism, which relies on unguided natural selection over mutations producing multiple intermediate organisms. In such systems as bird plumage, respiration and many others, the removal of any component in practice either fails to yield a selectable advantage or is lethal.

According to McIntosh, both the plumage and respiratory system

... are examples of the principle of specified functional complexity, which occurs throughout nature. There is no known recorded example of this developing experimentally where the precursor information or machinery is not already present in embryonic form. Such design features indicate non-evolutionary features being involved.

Fig. 1. Schematic of exaptation. Components that were part of irreducibly complex systems in ancestral forms (functions F1 and F2) are included (exapted) in an irreducibly complex system of a descendant form (function F3) as a result of a functional switch.

I would like to comment on this. First, despite the possibility of functional exaptation (fig. 1), which leads to a functional switch (e.g. according to a popular evolutionary hypothesis, plumage first served only for thermoregulation, not for flight), in practical situations the probability of multiple coordinated functional switches is, as a rule, vanishingly small. Moreover, in practice a gradualist sequence of functional switches appears implausible given the current estimates of available terrestrial probabilistic resources.

Furthermore, when analysing hypothetical phylogenetic trajectories of biosystems in the configuration space, it is important to realise that the terminal points of these trajectories may belong to different, and sometimes quite distant, attraction basins, in which case the trajectories themselves cross regions of chaos incompatible with life. Also, the capability of functional switching is itself underwritten by the semiotic processes in living organisms. Consequently, exaptation in practice always takes place in a given biological context and is limited by the capabilities of the built-in information processing systems (see my note here).

In addition, the macroevolutionary scenarios of what might have occurred are too vague in the face of experimental data [Behe 1996]. At the same time, fossil evidence speaks more against macroevolution than for it: while intermediate forms would have to be numerous to give macroevolution plausible grounds, the available fossils are too scanty and inconclusive [Dembski & Wells 2007].


Of course, macroevolutionary scenarios as explanations are usually on the favourable side (if you do not delve into detail), since it is extremely hard to exclude something as a possibility, which is exactly what the argumentation behind irreducible complexity aims to achieve. Indeed, the burden of proof is always on the party claiming a theoretical or practical impossibility. This is why in Roman law innocence is presumed: it is much easier to prove that a suspect could have committed a crime than that he could not [Gene 2007].


In any case, as more experimental data is collected, teleology as a principle behind the functioning of the biosphere as a whole, and of individual organisms, is becoming more convincing, while the case for unguided evolution is getting weaker. This said, the question of teleology is of course a demarcation question and therefore rather a philosophical one.

Other papers by McIntosh are reviewed here.

References

  1. M. Behe (1996), Darwin's Blackbox.
  2. W. Dembski, J. Wells (2007), The Design of Life.
  3. M. Gene (2007), The Design Matrix.
  4. A.C. McIntosh (2009), Evidence of Design in Bird Feathers and Avian Respiration, International Journal of Design & Nature and Ecodynamics, 4(2), pp. 154-169.
  5. Wikipedia, Demarcation problem.

Friday 27 April 2012

Examples of using ID in practice

The statistical theory of Intelligent Design posits the practical possibility of inferring purposeful intelligent activity behind changes in the configuration patterns of an arbitrarily complex system, given an analysis of those configuration patterns and an independently provided specification. In other words, based on compelling empirical evidence, ID states that under certain conditions inference to intelligent agency is the best explanation of certain traits.


A design inference is possible for functionally complex patterns of matter that comply with an independently provided specification. However, when intelligent actors generate simple patterns, it is not possible to infer design using this theory without additional information.
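As a purely illustrative sketch of how such a threshold could be expressed, the fragment below uses the Hazen–Szostak measure of functional information, I = -log2(f), where f is the fraction of all configurations that satisfy the independent specification, and compares it with a 500-bit cut-off in the spirit of a universal probabilistic-resources bound. The fractions and the cut-off are hypothetical placeholders chosen for illustration, not values taken from the cited works.

```python
import math

def functional_information(functional_fraction: float) -> float:
    """Functional information in bits: I = -log2(f), where f is the
    fraction of all configurations that meet the independent specification."""
    return -math.log2(functional_fraction)

THRESHOLD_BITS = 500  # illustrative cut-off in the spirit of a universal probability bound

# Hypothetical examples: a simple pattern vs. a functionally complex one.
for label, fraction in [("simple pattern", 1e-3),
                        ("functionally complex pattern", 1e-200)]:
    bits = functional_information(fraction)
    verdict = "design inferred" if bits > THRESHOLD_BITS else "no design inference"
    print(f"{label}: {bits:.0f} bits -> {verdict}")
```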


We use the basic idea of Intelligent Design, i.e. the practical possibility of design inference, in many areas of our daily life. In Fig.1 we present an incomplete list of examples of this.

Fig.1. Examples of using intelligence involvement detection in practice.

Saturday 14 April 2012

A Quote from Darwin

If I saw an angel come down to teach us good, and I was convinced from others seeing him that I was not mad, I should believe in design. If I could be convinced thoroughly that life and mind was in an unknown way a function of other imponderable force, I should be convinced. If man was made of brass or iron and no way connected with any other organism which had ever lived, I should perhaps be convinced. But this is childish writing (citation based on "The Design Matrix" by Mike Gene, emphasis added).
This is a quote from a letter by Charles Darwin to Asa Gray, a professor of botany at Harvard. Darwin wrote it in response to Gray's question about what it would take to convince him that life was designed. I think that Darwin's position here is profoundly unscientific. Yes, science encourages skepticism, but doubt is a double-edged sword. A scientist should always remain, to a reasonable extent, skeptical of his own views, should take pains to analyse all available data fairly, and should be prepared to follow the evidence wherever it leads.

Now that mathematics has established the existence of intrinsic limits to reason, such determined resistance to the mysterious and the non-formalisable should itself, dialectically, raise scientific doubts. On the other hand, the radicalism of bias is always dangerous, as it can make us blind to something essential:

If they hear not Moses and the prophets, neither will they be persuaded, though one rose from the dead (Luke 16:31).

References

1. M. Gene (2007), The Design Matrix (see also the eponymous blog).
2. M. Behe & J. Wells, Then and Only Then.

Wednesday 4 April 2012

Towards Defining Intelligence

Over recent months, in Intelligent Design-related discussions at UncommonDescent.com, I have heard many naturalistically-minded colleagues say that we, ID proponents, cannot adequately define intelligence.

I think that, as far as cybernetics is concerned, intelligence can be defined as anything that is able to impart functionality to systems. This phenomenological definition is analogous to the definition of gravity in Newtonian mechanics as a natural phenomenon by which physical bodies attract one another with a force proportional to their masses. Intelligence is the only means of creating functionally complex configurations/patterns of matter that we know of in practice. Figuratively speaking, intelligent agency leaves a trace of specified complexity behind, similarly to ants communicating by a pheromone trail.

Defining intelligence in this way, we escape both the infinite regress and the vicious circularity that come as a package deal with naturalistic emergentism, i.e. the contemporary theories of self-organisation that have little to do with reality or practicality.
  • Emergentism postulates the hypothetical spontaneous emergence of particular properties of systems under certain conditions. E.g. self-organisation assumes the unobserved spontaneous generation of formal functionality and cybernetic control in complex systems.
Another important issue here is the ability to describe scientifically the concrete ways in which intelligence can act on matter. I believe that the action of intelligence is inherently heuristic. It is quite interesting to note, in defense of this statement, that animals and humans can sometimes solve practical combinatorial problems very efficiently (see e.g. this paper about the efficiency with which practitioners solve the travelling salesman problem visually).
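As a toy illustration of what I mean by a heuristic, and not as a model of how people actually solve the problem visually, here is a minimal Python sketch of the greedy nearest-neighbour heuristic for the travelling salesman problem. It inspects only a negligible fraction of the n! possible tours yet usually returns a serviceable one; all names and parameters are my own illustrative choices.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour that visits the points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour_tour(points):
    """Greedy heuristic: start at city 0 and always step to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = nearest_neighbour_tour(cities)
print(f"Heuristic tour length over {len(cities)} cities: {tour_length(cities, tour):.3f}")
```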

Wednesday 28 March 2012

On Evolutionistic Solipsism

Intelligent Design makes a straightforward and natural claim:
  • If an observed phenomenon is amenable to design inference (that is, if it is complex enough and functional), it is very likely that it was actually intelligently designed. In this case, conscious design is abductively the best explanation of its existence. Take for example the car engine. It has a high level of complexity. Its multiple parts are interrelated and tuned to function properly and it is characterised by formal utility as an integrated whole. We correctly assume that the engine is a product of design. Likewise, we observe formal utility in living organisms and infer to design by Intelligence.
Evolutionism castrates scientific thought and ascribes the generation of functional complexity to chance and necessity only. Consequently, it is a kind of solipsism.
  • Solipsism is a radical philosophical view that accepts one's own individual consciousness as the only thing sure to exist. It consequently denies the objective reality of the world [1].
As Richard Dawkins, perhaps one of the most implacable and aggressive neo-Darwinists today, has it, we should think that design is only an apparent cause of life. Evolutionists personify natural selection by comparing it to a watchmaker, sculptor, software engineer, poet, composer and architect [6]. However, they always make the mandatory reservation that in reality no such watchmaker, sculptor or architect exists in person [5]. In their view, life, the most elaborate masterpiece of all the arts, is but a consequence of blind chance and brute force in varying combinations. A detailed plan of a multi-storey building exists without the architect. Software exists without the programmer. Is this not solipsism?

As the cyberneticist David Abel notes [4], the ability of mathematics, the archetype of formalism, to express the laws of nature tells us not only that human intelligence can formulate and work with powerful mathematical models, but also that our world, at least in part, admits formalisation. The fact that systematic observations point only to intelligence as the cause of formalism in this world suggests that the very amenability of our universe to formalisation is itself a product of Intelligence. In our egocentric solipsism, Abel continues, we cannot acknowledge this simple truth.

I can only add to this wonderful insight some personal observations. Many intellectuals in the West today are under the tremendous influence of Buddhism. It seems to me that what Abel pointed out is in fact one of the reasons why so many scientists are so entrapped. Buddhism essentially sees everything as an illusion and, by virtue of this, is inherently solipsistic; it therefore has a lot in common with evolutionism.

I think we have to change the status quo in the modern life sciences, where conformity with the philosophical commitments of the majority of scientists compromises both the cause and the methodology of science. The roots of this phenomenon lie in the unwillingness of the majority of the scientific community to accept the possibility of interpretations that are not in line with their materialistic way of thinking. This unwillingness, in turn, rests upon evolutionism, which has been silently elevated to the rank of dogma, the only permitted mode of explanation. To get out of this dead end we need to rehabilitate:


  • teleology as a legitimate scientific and philosophical position (cf. teleonomy [2,3,7]);
  • and, in particular, Aristotelian choice contingency/purposeful design as an acceptable causal category.
Failure to do so will continue to create obstacles to unprejudiced thought in the life sciences.

References

  1. Wikipedia, Solipsism.
  2. Wikipedia, Teleonomy.
  3. Wikipedia, Теория П.К.Анохина функциональных систем (P.K.Anokhin's theory of functional systems, in Russian).
  4. D. Abel, The First Gene.
  5. R. Dawkins, The Blind Watchmaker. 
  6. W. Dembski, J. Wells, The Design of Life.
  7. Egnorance Blog, Teleonomy and Teleology.