Tuesday, 30 October 2012

An interesting paper on modelling biosystems with cellular automata

It is known that biosystems can be modelled by cellular automata (CA). Cellular automata are regular structures in an N-dimensional space composed of finite-size cells, each of which can be in one of a finite number of states, together with a set of rules specifying state transitions. John von Neumann showed that there exist cellular automata capable of modelling biosystems. These automata are dynamically stable, self-replicating and Turing-equivalent (i.e. they are in fact universal computers).

Interestingly, the configuration spaces of biosystems can contain isolated functional zones amid chaos. On reflection, this is to be expected: any sufficiently complex functional system is bound to have zones of function-compatible parameter values, and the relative size of such a zone shrinks as the number of parameters and the criticality of their values for normal operation grow. The same holds for biomachines, even allowing for their high adaptability. This is what undermines both classical Darwinism and the modern evolutionary synthesis: if two given taxa belong to two separate parameter zones, there is no selectable Darwinian path from one to the other.

Earlier we discussed the options for a blind Darwinian search in a configuration space. Here is another very interesting paper on the possibility of reaching target zones by Darwinian means (mutations + drift + natural selection) [Bartlett 2008], which summarises the results of mathematical modelling of the evolution of biosystems using cellular automata. Four complexity classes of CA have been identified:
  1. automata that always arrive at a homogeneous state no matter what their initial state was;
  2. automata whose outcomes have a finite sphere of influence even if the computation is carried out for an infinite number of steps;
  3. chaotic systems, where initial states neither have a bounded sphere of influence nor produce predictable results; they do, however, have stable statistical properties;
  4. automata that exhibit a hybrid of the periodic behaviour of Class 2 systems and the chaotic nature of Class 3 systems. More importantly, they are unpredictable both in their exact outcomes and in their statistical properties. These CA are Turing-equivalent.
Since Class 4 systems are chaotic to some degree, the mapping between changes in the code and changes in the outcome (the genotype-phenotype correlation) is also chaotic. Consequently, in this case a Darwinian selectable path from one taxon to another does not exist. (A toy implementation of the four classes is sketched below.)
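For concreteness, here is a minimal sketch of a one-dimensional ("elementary") cellular automaton. The rule numbers used as illustrations (0, 4, 30 and 110) are common textbook examples of Classes 1-4 respectively (rule 110 is the one proven Turing-equivalent); they are chosen here purely for illustration and are not taken from [Bartlett 2008].

```python
# Minimal elementary (1-D, two-state, nearest-neighbour) cellular automaton.
# Rules 0, 4, 30 and 110 are shown as rough examples of Wolfram Classes 1-4.

def step(cells, rule):
    """Apply an elementary CA rule (0-255) to one row of cells (0s and 1s)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood as a number 0..7
        out.append((rule >> pattern) & 1)               # look up the corresponding bit of the rule
    return out

def run(rule, width=79, steps=30):
    cells = [0] * width
    cells[width // 2] = 1                               # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    for rule in (0, 4, 30, 110):                        # roughly Classes 1, 2, 3 and 4
        print(f"--- rule {rule} ---")
        run(rule)
```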

An interesting property of Class 4 automata is that both the degree of chaotisation and the evolvability depend on the implementation language. Phenomena similar to irreducible complexity [Behe 1996] are, as a matter of fact, not invariant with respect to the implementation language. According to Bartlett, the existence of irreducible complexity and similar phenomena points to the existence of a higher-order control over genome evolution. Notwithstanding the evolvability of feedback loops, as Bartlett points out, they must be initially loaded into biosystems, since feedback loops never arise spontaneously in reality. In conclusion, we simply point out that the presence of control over evolutionary processes is, in some sense, the demise of evolution per se (never mind the inevitably oxymoronic phrase "evolutionary process") [Abel 2011], since evolution by definition is undirected.


References
  1. David Abel (2011): The First Gene.
  2. Jonathan Bartlett (2008): Wolfram's Complexity Classes, Relative Evolvability, Irreducible Complexity, and Domain-Specific Languages. Occasional Papers of the BSG 11:5.
  3. Michael Behe (1996): Darwin's Black Box.

Friday, 26 October 2012

What is this functionally specified information after all?

From complexity theory we know that any object can be represented by a string of symbols from a universal alphabet. So we shall concentrate on strings for convenience and clarity. Consider three strings of the same length (72 ASCII characters):
  1. "jjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjbjjjb"
  2. "4hjjjqq,dsgqjg8 ii0gakkkdffeyzndkk,.j fnldeeeddzpGCZZaQ12 9nnnsskuu6 s."
  3. "Take 500 g. of flour, 250 ml of water, 2 tsp. of sugar, 1egg. Mix, whip."

String 1 exhibits high periodicity: it is Kolmogorov-simple (highly compressible), monotonous, redundant and highly specific.


String 2 is a collection of random symbols (well, we assume they are genuinely random for the sake of the argument). It is far more complex than string 1 in the Kolmogorov sense, practically incompressible, and lacks specificity.


String 3 comes from a food recipe and is therefore a message, unlike the other two strings. Importantly, string 3 carries associated functionally specified information. Failure to recognise this fact leads to a widespread error, namely the claim that strings 1, 2 and 3 are equiprobable outcomes of the hypothetical process of spontaneous information generation that might have given rise to biological function. In our example, the function associated with string 3 is the instruction to cook the dish.
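As a crude illustration of the compressibility point (Kolmogorov complexity itself is uncomputable, so an off-the-shelf compressor is only a rough proxy), one can feed the three strings to zlib. The sketch below assumes string 1 is exactly eighteen repetitions of "jjjb" (72 characters).

```python
# Rough proxy only: Kolmogorov complexity is uncomputable, so zlib is used
# here merely to illustrate the difference in compressibility.
import zlib

strings = {
    "string 1 (periodic)": "jjjb" * 18,   # assumed: 18 repetitions of "jjjb"
    "string 2 (random)":   "4hjjjqq,dsgqjg8 ii0gakkkdffeyzndkk,.j fnldeeeddzpGCZZaQ12 9nnnsskuu6 s.",
    "string 3 (recipe)":   "Take 500 g. of flour, 250 ml of water, 2 tsp. of sugar, 1egg. Mix, whip.",
}

for name, s in strings.items():
    packed = zlib.compress(s.encode("ascii"), level=9)
    print(f"{name}: {len(s)} bytes -> {len(packed)} bytes compressed")

# The periodic string shrinks to a fraction of its size; the random string and
# the recipe barely compress at all (they may even grow slightly). Note that
# compressibility alone cannot tell the functional string apart from the random
# one -- which is why a separate measure of functionally specified information
# is needed.
```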


Is it possible to rigorously define the functionally specified information associated with a symbolic string? Is it possible to measure it? The answer to both questions is yes. In the literature on the subject [Hazen et al. 2007, Szostak 2003], the information associated with a specific function f is defined as follows:

    I(f) = -log2( M(f) / W )

Here M(f) > 0 is the number of strings prescribing function f, and W is the total number of possible strings of, or up to, a given length. Clearly, M(f) ≤ W. We can see that reducing specificity (raising the ratio M(f)/W towards 1) reduces the amount of specified information, and vice versa.
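A direct transcription of this definition into code is almost trivial. The sketch below uses made-up toy numbers (a 64-symbol alphabet and an assumed count of functional strings), purely to show how the bit count falls out of the ratio M(f)/W.

```python
# Functional information as defined above: I(f) = -log2( M(f) / W ).
# The numbers below are toy values, not measurements.
from math import log2

def functional_information(m_f: int, w: int) -> float:
    """Return I(f) = -log2(M(f)/W) in bits, given M(f) functional strings out of W."""
    if not (0 < m_f <= w):
        raise ValueError("require 0 < M(f) <= W")
    return -log2(m_f / w)

# Toy example: 72-character strings over a 64-symbol alphabet (W = 64**72),
# of which, say, 10**12 are assumed to perform the function.
W = 64 ** 72
M_f = 10 ** 12
print(f"{functional_information(M_f, W):.1f} bits")   # ~392 bits for these toy numbers
```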

Note that the specificity of information in a string of symbols determines the probability distribution of a given symbol appearing at various positions in the string. E.g. if we move the symbols "Take 500 g. of flour" from the beginning to the end of string 3, its functional integrity will be compromised. As one would expect, Douglas Axe [Axe 2004, Axe 2010a,b] came to the same conclusions after conducting a series of experiments studying the properties of protein domain functionality in bacteria: the functionality of protein domains was found to be deeply isolated in the configuration space, the number of possible amino acid residue permutations being vastly greater than the number of permutations that preserve the given function. Neither highly periodic strings of type 1 nor random strings of type 2 are capable of carrying practically significant amounts of functionally specified information.


The above is not taken into consideration in various experimental attempts to imitate hypothetical spontaneous function generation, for example in the form of meaningful text. I saw somewhere on the internet a discussion of an allegedly successful random generation of the first 24 consecutive symbols of a Shakespearean comedy. Assuming such a sequence was indeed generated, the real question is what the generation algorithm took for granted. The major problem with this kind of numerical experiment is that it assumes a priori the existence of an interpretation protocol uploaded into the system (here, the English language). Besides, the computer code in these experiments (e.g. the notorious Weasel program by R. Dawkins) is often tuned to drive the search towards a goal state in the form of some chosen phrase. More subtle ways exist to sneak information about the solution distribution into the search. It is possible to implicitly drive the search even when the target phrase is not given in advance; the search is still made aware of the likelihood of meeting a solution in different areas of the search space. E.g. given the structure of words and sentences in a language, we have information about the mean frequencies of various letters, which can be used when building random phrases. For more detail on how such so-called active information about the distribution of solutions in the search space can be exploited during a search, see [Ewert et al. 2012].
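To see where the target information enters this kind of search, here is a stripped-down sketch of a Weasel-style program (a reconstruction for illustration, not Dawkins' original code). The fitness function scores candidates letter by letter against a pre-specified target phrase, so the "solution" is effectively handed to the search before it starts.

```python
# A Weasel-style cumulative-selection search. The point of the sketch is that
# fitness is measured against a pre-specified TARGET, i.e. the information
# about the solution is supplied to the search up front.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # Letter-by-letter match against the target: this is where the target
    # information enters the search.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def weasel(offspring: int = 100) -> int:
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        generations += 1
        brood = [mutate(parent) for _ in range(offspring)] + [parent]
        parent = max(brood, key=fitness)   # keep the fittest candidate each generation
    return generations

if __name__ == "__main__":
    print("Target reached after", weasel(), "generations")
```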


Earlier we pointed out that strings are useful mathematical representations of reality. Consequently, various configurations of actual material systems also carry certain quantities of functionally specified information, and this information can be measured, which is already being done. [Durston et al. 2007] proposed a method to quantify functionally specified information in proteins based on the reduction in functional uncertainty in aligned functional amino acid sequences compared to a null state in which every sequence is equiprobable (total loss of function).
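The idea behind Durston's measure can be sketched as follows (a simplified illustration, not the authors' own code): for each aligned site, compare the maximal "null" uncertainty of log2(20) ≈ 4.32 bits for the twenty amino acids with the Shannon uncertainty actually observed across known functional sequences, and sum the reductions over all sites.

```python
# Simplified sketch of functional sequence complexity: sum over aligned sites
# of (null entropy - observed entropy), in bits. Illustration only.
from collections import Counter
from math import log2

NULL_ENTROPY = log2(20)   # 20 amino acids assumed equiprobable in the null state

def site_entropy(column: str) -> float:
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def functional_sequence_complexity(alignment: list) -> float:
    """Sum over sites of (null entropy - observed site entropy), in bits."""
    length = len(alignment[0])
    columns = ("".join(seq[i] for seq in alignment) for i in range(length))
    return sum(NULL_ENTROPY - site_entropy(col) for col in columns)

# Tiny made-up alignment of "functional" sequences, for illustration only.
toy_alignment = ["MKVLA", "MKVLG", "MRVLA", "MKVIA"]
print(f"{functional_sequence_complexity(toy_alignment):.1f} bits")
```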


An objective analysis of functionally specified information found in nature leads to the following conclusions. Large quantities of functionally specified information are detected only in human artefacts (such as complex information-processing systems and natural or computer languages) and in biosystems. Consequently, we are entitled to infer by induction that life's origin is also artificial. This scientific inference is, of course, open to experimental refutation. To disprove it, it is sufficient to empirically demonstrate that:
  • there exists a mechanism, utilising only spontaneous and law-like causal factors, which is capable of generating and applying a protocol of information processing in a multipart system;
  • this mechanism can spontaneously generate and unambiguously interpret sufficiently long instructions in agreement with the protocol; the length of instructions in a language compatible with the protocol should be equivalent to at least 500 bits of information, as is the case with 72 ASCII characters (excluding special characters) of meaningful text such as string 3 in the above example. For biosystems, according to Durston, granting the highest possible replication rate over the entire span of natural history (4.5 billion years), the information threshold is 140 bits (about 20 ASCII characters); a rough bit-count check is sketched below.
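For what it is worth, here is the back-of-the-envelope arithmetic behind those two thresholds, under the simple assumption of 7 bits of information per ASCII character (the exact per-character figure depends on which characters are admitted):

```python
# Assumption: 7 bits of information per ASCII character (a full 7-bit code).
BITS_PER_CHAR = 7

print(72 * BITS_PER_CHAR)    # 504 -- a 72-character string clears the 500-bit threshold
print(140 / BITS_PER_CHAR)   # 20.0 -- the 140-bit threshold corresponds to ~20 characters
```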

References

  1. D. Axe (2004) Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds. Journal of Molecular Biology, Volume 341, Issue 5, 27 August 2004, Pages 1295-1315.
  2. D. Axe (2010a) The Case Against a Darwinian Origin of Protein Folds. BIO-Complexity.
  3. D. Axe (2010b) The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations. BIO-Complexity.
  4. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007) Measuring the functional sequence complexity of proteins. Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
  5. W. Ewert, W. Dembski, R. Marks (2012) Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem. BIO-Complexity.
  6. Hazen R.M., Griffin P.L., Carothers J.M., Szostak J.W. (2007) Functional information and the emergence of biocomplexity. PNAS 104:8574-8581.
  7. Szostak J.W. (2003) Functional information: Molecular messages. Nature 423:689.

Tuesday, 23 October 2012

On Two Competing Paradigms

We briefly compare two different systems of axioms: the evolutionist (emergentist) paradigm and the one based on intelligent design (ID) detection. The list below reveals the weaknesses of evolutionism as a philosophical basis for the life sciences. The paradigm based on the theory of intelligent design detection, in contrast, appears to be more powerful and more generic: evolutionary mechanisms play a secondary and quite limited role in it, which reflects reality better. From the philosophy-of-science point of view, the fact that the problem of apparent design in biosystems is adequately addressed is a definite strength of ID. Evolutionism, on the contrary, claims that design in biosystems, which is obvious to an unbiased observer, is but an illusion. Since evolutionism does not include choice-contingent causality in the set of legitimate causal factors, its axiomatics cannot withstand the challenges of information theory and cybernetics. In contrast, the inclusion of choice as a causal factor, as is done in ID, allows us to fully appreciate the commonality between complex artificial information-processing systems and living organisms in that they share cybernetic control, formalism and semiosis.

  • Apologies for the long list. Unfortunately, tables here at blogspot.co.uk are not parsed correctly by Internet Explorer.

  1. Causal factors used in scientific analysis
    1. Evolutionism: Chance contingency, necessity. The legitimacy of choice is denied.
    2. Intelligent Design Detection: Choice contingency, chance contingency, necessity.
  2. Infinite regress
    1. Yes.
    2. No (on condition that intelligence is assumed to transcend matter).
  3. The problem of initial conditions
    1. Needs to be solved. The probability of randomly hitting a functional target zone in a vast configuration space is below the plausibility threshold on the gamut of terrestrial interactions.
    2. Solved.
  4. Design of biosystems
    1. Claimed an illusion.
    2. Acknowledged in principle.
  5. Teleology and goal setting
    1. Local. The existence of a global goal, foresight or planning is denied at all levels, be it in the universe or in human life.
    2. Local and global. Hierarchies of goals. The goal of an object is regarded as extraneous to it, defined in relation to other objects.
  6. Forecasting
    1. Problematic and only retrospective.
    2. Possible.
  7. Origin of control, formalism and hierarchy in complex systems
    1. Spontaneous. No empirical data exists to support the claim. Statistical implausibility of spontaneous generation thereof is illustrated by the infinite monkey theorem.
    2. Purposive intelligent generation supported by massive observation (complex artificial systems and bioengineering).
  8. Primacy of constraints vs. rules
    1. Constraints are prior to rules.
    2. Rules are prior to constraints.
  9. Spontaneous generation of regular configurations (self-ordering) in open thermodynamic systems
    1. Accepted.
    2. Accepted.
  10. Possibility of detection of intelligent agency in the origins of biosystems
    1. Denied.
    2. Accepted.
  11. Adaptational and preadaptational mechanisms and the possibility of their change
    1. Accepted. The changes are postulated to be unguided.
    2. Accepted. The possibility and the limits of change are determined by the uploaded meta-rules.
  12. Image of the biosphere in the configuration space
    1. A continent of functionality. Plasticity of biosystems [Darwin 1859].
    2. An archipelago: islands of function in the ocean of chaos. Adaptational movement is limited to occur within particular islands.
  13. Geometric image
    1. A single phylogenetic tree (classical Darwinism) or a network (considering epigenetic phenomena).
    2. A phylogenetic forest (short trees) or a set of low-cycle graphs (considering epigenetic phenomena).
  14. Common descent
    1. Postulated (sometimes the possibility of several common ancestors is acknowledged). Genome homology is used as proof.
    2. Accepted in principle. Genome homology can alternatively be viewed as a result of common design (by analogy to software design process or to text writing).
  15. The strong anthropic principle
    1. An illusion. The problem of fine-tuning the universal constants for compatibility with life is claimed to be non-existent.
    2. The necessity to solve the problem is acknowledged. Methodological means to solve it are provided.
  16. The problem of defining and studying the phenomenon of intelligence
    1. Reduced to the basic physico-chemical interactions (physicality).
    2. Addressed in its own right, without reduction to physicality. Information semantics cannot be, and is not, reduced to semiotics or physicality. The meaning of a message cannot be reduced to the mere physics of the information channel.
  17. The problems of psychology and the human psyche
    1. Induced by the basic physico-chemical factors. Feelings, cognitive and rational activities are ultimately reduced to the four basic physical interactions.
    2. Include the physico-chemical aspect but cannot be exhausted by it. Addressed in their own right in view of their irreducibility to physicality. The hierarchy of complexity of reality is acknowledged. The succession non-living matter -> life -> intelligence -> consciousness is analogous to the hierarchy of complexity classes of decision problems (see here).
  18. Free will
    1. An illusion. In essence, it is denied since all is determined by chance and necessity and, ultimately, by the four basic physical interactions.
    2. Acknowledged since legitimate causality includes choice contingency.
  19. Creativity
    1. Practically non-existent, since it is substantially limited by the probabilistic resources available in a given system, while creative causality is represented by chance alone. The necessity of selection acts passively and therefore cannot be a factor of creativity.
    2. Possible in the full sense, by means of imparting functionally specified information to systems via programming the configurable switch bridge [Abel 2011].
  20. Origin of life
    1. Abiogenesis: spontaneous generation of protobiological structures from inorganic compounds.
    2. Purposive generation of initial genomes, tuning the homeostatic state, replication, reaction to stimuli, etc. Uploading the protocols of genetic instruction interpretation as well as meta-rules controlling the limits of adaptive change in those protocols.
  21. Isolatedness and extreme rarity of functional protein domains
    1. The model to explain functional domain generation includes mutations, drift and selection. It does not satisfy the statistical plausibility criterion. The extreme rarity and deep isolation of biofunction in the configuration space in practice rule out incremental solution generation by blind unguided search on the gamut of terrestrial physico-chemical interactions. This problem is inherently related to the problem of initial conditions (an unacceptably low probability of hitting the target zones of parameters compatible with life).
    2. Purposive intelligent generation of functional domains. A proof of concept is provided by bioengineering.
  22. The role of mutation, recombination, selection and drift
    1. Primary in explaining all observed biodiversity.
    2. Secondary, adaptational within the given higher taxa.
  23. The universal plausibility criterion [Abel 2009]
    1. Not satisfied: the probability of spontaneous generation of genetic instructions is orders of magnitude below the plausibility threshold on the gamut of terrestrial physico-chemical interactions; the time necessary to accumulate the observed amounts of specified functional information is orders of magnitude greater than the lifespan of the universe.
    2. Satisfied. Intelligent choice of initial conditions and rules for the functioning of biosystems in analogy to the known complex artificial systems of information processing.
  24. The presence of functionally specified information in biosystems
    1. Accepted. Nucleotide sequences code for biofunction.
    2. Accepted. The amount of functionally specified information associated with a given function is determined by the ratio of the number of sequences coding for that function to the maximum possible number of sequences (specifically, by its negative logarithm) [Durston et al. 2007].
  25. The source of functionally specified information in biosystems
    1. Spontaneous and law-like causal factors: mutation, recombination, drift, selection. Unsupported empirically.
    2. Choice-contingent causality. Strongly supported empirically: apart from biosystems, only complex artefacts such as languages and information-processing systems exhibit functionally specified information.
  26. Commonality of complex artificial systems and biosystems
    1. Denied.
    2. Determined on the basis of analyses of available observations. Both complex artefacts and biosystems are systems of semiotic information processing. The functioning of such systems presupposes the a priori uploading of a common alphabet and of rules of syntax and semantics for future information exchange between system components. Large amounts of functionally specified information are observed only in complex artificial systems and in living organisms.
  27. Possibility of spontaneous generation of intelligence
    1. Assumed as the only option. Unwarranted empirically.
    2. Denied, based on vast empirical evidence and on the analysis of how much functionally specified information can be spontaneously generated given the probabilistic resources available in a given system.


References

  1. David L. Abel (2009), The Universal Plausibility Metric (UPM) & Principle (UPP). Theoretical Biology and Medical Modelling, 6:27.
  2. David Abel (2011), The First Gene.
  3. Durston, K.K., D.K.Y. Chiu, D.L. Abel and J.T. Trevors (2007), Measuring the functional sequence complexity of proteins. Theoretical Biology and Medical Modelling 4:47. [doi:10.1186/1742-4682-4-47]
  4. Charles Darwin (1859), On the Origin of Species.
  5. UncommonDescent.com
  6. Wikipedia.

Saturday, 20 October 2012

Evolutionist Euthanasia of Science

Albrecht Dürer. St Jerome, 1514.

Evolutionism is unable to adequately explain the apparent design of biological structures, i.e. to explain something that is directly observable and amenable to analysis. Of course, it does offer explanations, but they are overly complex, if plausible at all. It is not polite to mention such things as Occam's razor.

Evolutionist philosophy has done a lot of damage to science and continues to corrode it. The world's science today resembles a transnational corporation, with its board of directors, sponsors, clients such as pharmaceutical giants, and grantomania. No one is interested in scrupulous scientific inquiry and the quest for truth. Paper publication functions like a mass-production conveyor belt, and no one is responsible for the published results (except, perhaps, in cases of dissent from the hard line of the Evolutionist Partei). Evolutionist ideology rules in the lab.

Modern science, a successor of the European science of the times of the Industrial Revolution, has departed from its Christian roots. Now that the understanding of an overarching purpose in the existence of the world, as well as the recognition of the worth and uniqueness of human life, has been lost, it is impossible to find sufficient ethical grounds for scientific inquiry, since only religion provides a genuine impetus to ethics and morality. This is why we are experiencing a lowering of the standards of scientific research and a decline in its prestige among the young generation.

In its solipsism, evolutionism undermines the philosophical ground of the scientific method: the objectivity of observation. If I can see apparent design in a biological system, then as a scientist I have the right to assume that my observations correspond to reality. The policy of placing artificial bounds on the set of possible causal factors in the genesis of biological systems, pursued by evolutionism allegedly in good faith and for the sake of science, turns out to be detrimental to it. The basis of any scientific inquiry, namely the connection between a chain of reasoning and objective reality, is no more. Reality is an illusion. Science drifts towards Buddhism, which poisons many minds, especially in the West.

If we value science and its cause, we must reconsider the evolutionist paradigm. The sooner, the better. 

Monday, 15 October 2012

Evolution of Biosystems vs. Evolution of Languages

The evolution of biosystems is often likened to the evolution of languages [UncommonDescent]. The two even share a common visual representation: both are thought of as trees (see Figs. 1-2).

Fig.1. Darwinian "tree of life".


Fig.2. Families of Indo-European languages.

In certain cases large numbers of people or animals can behave like non-living matter, and consequently their behaviour can be described by the same laws we use to describe the behaviour of non-living matter. For example, migrations of nations can be modelled accurately enough as viscous fluid flow on a surface [Pickover 2008]. But can we assume that something similar is true of the evolution of languages?


Let us put aside the information-theoretic problems of biological evolution we discussed earlier here and focus on the analogy. I have problems with it (assuming Darwinian evolution). 

The first reason why I am not happy with it is the rules vs. constraints dichotomy highlighted by David Abel in The First Gene [Abel 2011].

By definition, semantics defines a set of rules for the interpretation of declared syntactic constructs. Empirical science testifies to the fact that rules are not the same as constraints. In practice, rules are always defined by intelligent decision makers, whereas constraints are represented by the physico-chemical interactions necessarily present in any physical system. In the case of linguistics, the decision makers are:
  • Communicating people:
    • Individuals influential in the development of a language (such as Sts Cyril and Methodius for the Slavs, Pushkin for modern Russian, or Chaucer and Shakespeare for English), as well as
    • ordinary language speakers.
  • People communicating with machines
    • This is similar to the above. Perhaps, the only difference is that in the latter case the number of decision makers involved is much smaller, while the process itself is a lot more formal. 
Undoubtedly, a spoken language is influenced by large numbers of people over long periods of time, but that does not mean we can discount the intelligence of their agency.

Redefining the rules of syntax interpretation in linguistics is always done by decision makers in a given context. However, in classical Darwinism or in neo-Darwinism, intelligent agency is ruled out.

The second reason why I cannot agree with the analogy is that random variation, drift and natural selection, postulated in evolutionary theories to be the major factors of creativity, are not capable of reliably generating the large quantities of functionally specified information which natural or computer languages exhibit. Massive empirical evidence suggests that the only reliably identifiable source of functionally specified information is the intelligence of decision makers creating formal systems (in particular, languages).

References


  1. David Abel (2011): The First Gene.
  2. Clifford A. Pickover (2008): Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press.
  3. UncommonDescent.com: readers' internet forum.