Friday, 23 March 2012

Notes on the Margins

  • Teleology is as scientific as non-teleology:
    • The global teleological assumption that the universe has a purpose is exactly as scientific as the contrary non-teleological assumption. Depriving the former of its right to be treated as scientific is unjustified as far as the philosophy of science is concerned.
    • The status quo in science, where materialism de facto enjoys a monopoly on the interpretation of scientific findings, is harmful to scientific enquiry in the long run.
  • Defending design/choice contingency as a factor of causality:
    • Without choice contingency in the set of causal factors (alongside chance and necessity) it is impossible to develop an unbiased scientific understanding of intelligence, consciousness, life, or of the creation and functioning of complex artefacts.
    • The expulsion of choice contingency from the scientific method is explained by the aforementioned materialistic monopoly on interpretation, and it violates the objectivity of scientific investigation.
  • "Correlation is not causation" [Dembski & Wells 2007]:
    • To say that genome homology proves common descent is logically wrong: it is just one possible interpretation of the available data. Another possible interpretation is common design, although common design and common descent are not mutually exclusive. One can argue in favour of or against particular interpretations (theories), but one cannot prove a theory. Common design was a very popular interpretation before Darwin, and it is regaining popularity now as more data becomes available.
  • "Science starts with figures" [Behe 1997], [Depew & Weber 2011]:
    • Neo-Darwinism has no underpinning mathematical core. I hazard a guess that it never will, simply because it has very limited predictive power (retrodicting the notorious missing links in the geological strata appears inconclusive and unconvincing). Mathematical formulas, on the other hand, reflect the inherent formal laws underlying physical reality; consequently, the formulas themselves are a form of prediction.
  • It is only choice contingency that has a genuine creative potential [Abel 2011]:
    • The entire scientific and technological experience of humanity suggests that the practically unlimited creative potential usually (and wrongly) attributed to evolution (i.e. to a combination of law-like necessity and chance) is only achievable through a purposive choice of:
      • the initial conditions;
      • the architectural design and parameter tuning of the system's functional components;
      • the material symbol system in semiotic information transfer between the components of the system.
    • Law-like necessity and chance are not enough for an algorithm to function; a functioning algorithm is only made possible via choice contingency, a special causal factor.
  • Spontaneous generation of novelty per se is not observed [Ewert et al. 2012]:
    • Genetic algorithms require careful parameter tuning before they can work, and they work by merely rearranging parts of a single whole. The intelligent power of GAs should not be overemphasised (a minimal sketch of how much the programmer supplies follows this list).
    • Any algorithm (and a GA in particular) is teleological because it has a goal state to reach. Inanimate matter is inert to goals, and consequently it could not have generated the genome.
    • Self-tuning and learning algorithms, like all others, are intelligently created; their self-tuning and learning behaviour is but the concentrated and formalised expertise of human professionals.
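
A minimal sketch (my own illustration in Python, not taken from [Ewert et al. 2012]) of how much a "working" genetic algorithm owes to its programmer. In this Dawkins-style "weasel" search, the alphabet, the target, the population size, the mutation rate, the fitness function and the stopping criterion are all chosen in advance; the algorithm merely rearranges symbols of a fixed alphabet towards a goal state it was handed:

```python
import random

# Every one of these parameters is a design decision made in advance by the
# programmer; none of them is discovered by the algorithm itself.
ALPHABET      = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # the chosen symbol set
TARGET        = "METHINKS IT IS LIKE A WEASEL"  # the goal state, fixed up front
POPULATION    = 100                             # tuned population size
MUTATION_RATE = 0.05                            # tuned per-symbol mutation probability

def fitness(candidate: str) -> int:
    # The fitness function quietly encodes knowledge of the target; without
    # it the search degenerates into a blind walk through 27**28 configurations.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def run() -> int:
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(POPULATION)]
        parent = max(offspring + [parent], key=fitness)  # selection step
        generations += 1
    return generations

if __name__ == "__main__":
    print(f"Goal state reached after {run()} generations")
```

This typically converges within a few hundred generations, but only because a distance-to-target measure was handed to it; remove the fitness function and no amount of mutation and selection will find the target in realistic time.
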
  • A working algorithm without a conscious programmer is nonsense:
    • The possibility of there being algorithms capable of working with the genome points to intelligent agency behind those algorithms more reliably than to chance and/or necessity: in inanimate, unconscious matter, formalisms do not spontaneously arise.
    • If it is impossible to have an algorithm without a programmer, it is ridiculous to suggest that error-correcting and noise-reducing code should come about by trial and error (in neo-Darwinian models the generation of novelty is attributed to errors during replication [Dawkins 1996]).
    • There is a general consensus that DNA/RNA is in fact code: it has an alphabet, a syntax and semantics. Anyone who has done at least some programming will agree that new functionality is not added by mutating the existing code chaotically and relying on the customer to take it or leave it as is (which is how neo-Darwinian models picture mutations and natural selection acting on the genetic code). On the contrary, in practice new functionality appears in code as the result of a formal process: formulation of requirements -> proof of concept -> implementation -> testing. A new module or data structure is added to the existing code at once, not through symbol-by-symbol or line-by-line trial and error, and it happens as a result of the conscious work of a team of experts (a crude experiment contrasting the two approaches follows this list).
      • The claim that novelty is purely a result of the (huge) probabilistic resources available to evolution can no longer be accepted. The probabilistic resources (i.e. the total number of realisable configurations over a given amount of time) are in fact limited, and this is easily demonstrated (see the back-of-envelope calculation after this list). For the step-by-Darwinian-step accumulation of enough genetic information to get life going, the whole terrestrial lifespan falls short by many orders of magnitude, even on very liberal estimates (details here).
      • Hypotheses of non-intelligent extraterrestrial origins of life (e.g. panspermia) in principle cannot resolve the issue of the initial conditions and complexity required for living organisms: contemporary estimates of the age of the universe and of the age of the Earth are of the same order of magnitude (10^17 seconds).
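
To make the contrast concrete, here is a crude experiment (my own toy illustration, not taken from the cited literature): take the source of a short, working Python function, apply random single-character substitutions to it, and count how many mutants still run and still compute the right answers:

```python
import random
import string

# A short, working "genome": the source code of a correct function.
SOURCE = "def f(x):\n    return 3 * x + 1\n"

def mutate(source: str) -> str:
    # Substitute one randomly chosen character: a crude analogue of a
    # point mutation during replication.
    i = random.randrange(len(source))
    return source[:i] + random.choice(string.printable) + source[i + 1:]

def still_functional(source: str) -> bool:
    # A mutant counts as functional only if it still compiles, runs,
    # and computes the same results as the original.
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["f"](2) == 7 and namespace["f"](10) == 31
    except Exception:
        return False

if __name__ == "__main__":
    trials = 10_000
    survivors = sum(still_functional(mutate(SOURCE)) for _ in range(trials))
    print(f"{survivors} of {trials} single-character mutants still work")
```

In a typical run the overwhelming majority of mutants are broken outright, and the rare survivors are neutral edits (whitespace, or a character replaced by itself), not new functionality.
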
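As for the probabilistic resources themselves, the back-of-envelope arithmetic is easy to reproduce. The sketch below is my own, using figures common in the design literature (roughly 10^80 particles in the observable universe, at most ~10^45 state changes per second, and the ~10^17 seconds quoted above); the 150-residue protein is an arbitrary illustrative choice:

```python
from math import log10

# Generous upper bound on the number of elementary events available in the
# observable universe; every figure here is a deliberate overestimate.
particles         = 1e80   # particles in the observable universe
transitions_per_s = 1e45   # ~one state change per Planck time
seconds           = 1e17   # age of the universe, as quoted above
log_resources = log10(particles) + log10(transitions_per_s) + log10(seconds)

# Configuration space of a single modest protein: 150 residues drawn from
# the 20-letter amino-acid alphabet.
residues, alphabet = 150, 20
log_configurations = residues * log10(alphabet)

print(f"log10(probabilistic resources) ~ {log_resources:.0f}")      # ~142
print(f"log10(protein configurations) ~ {log_configurations:.0f}")  # ~195
print(f"shortfall: a factor of ~10^{log_configurations - log_resources:.0f}")
```

Even with every particle testing configurations at the Planck rate for the universe's entire lifetime, the configuration space of one modest protein exceeds the available trials by some fifty orders of magnitude; one can dispute the premises, but the arithmetic itself is straightforward.
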
  • Information exchange in semiotic systems (biosystems included) is impossible without first agreeing on a common [Виолован 1997]:
    • Alphabet;
    • Syntax;
    • Protocol of information coding/decoding, including an agreement about semantics between the sender and the receiver (a toy sender/receiver sketch follows this list).
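
A toy illustration (my own, loosely modelled on RNA codons) of why all three agreements must precede the very first message: the receiver below recovers the meaning only because the alphabet, the syntax (a fixed word length) and the semantics (the codebook) were settled in advance; change any one of them on one side only, and communication fails:

```python
# Shared conventions, fixed *before* any message is sent.
ALPHABET = "ACGU"                       # agreed symbol set
CODEBOOK = {"AUG": "start", "UUU": "Phe", "GGC": "Gly", "UAA": "stop"}
WORD_LEN = 3                            # agreed syntax: fixed-length words

def encode(meanings):
    # Sender side: meanings -> symbols, via the shared codebook.
    reverse = {meaning: word for word, meaning in CODEBOOK.items()}
    return "".join(reverse[m] for m in meanings)

def decode(message):
    # Receiver side: assumes the very same alphabet, syntax and semantics.
    assert all(c in ALPHABET for c in message), "no shared alphabet"
    words = [message[i:i + WORD_LEN] for i in range(0, len(message), WORD_LEN)]
    return [CODEBOOK[w] for w in words]  # a KeyError means no shared semantics

if __name__ == "__main__":
    msg = encode(["start", "Phe", "Gly", "stop"])
    print(msg, "->", decode(msg))        # AUGUUUGGCUAA -> ['start', ...]
    WORD_LEN = 4                         # the receiver breaks the syntax agreement
    try:
        decode(msg)
    except KeyError as bad_word:
        print("decoding failed on", bad_word, "(syntax mismatch)")
```

The very same string of symbols is meaningless to a receiver that parses it with a different word length: the information resides not in the symbols alone but in the prior agreement between the parties.
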
  • Life comes only from life (the main principle of vitalism):
    • We have absolutely no evidence of the emergence of life from non-life. Such a scenario is better suited to Sci-Fi than to science.
    • Abiogenesis was seriously called into doubt by the work of Louis Pasteur, who in 1860-62 demonstrated experimentally that micro-organisms do not arise spontaneously in sterilised media.
  • The spontaneous generation of intelligence is also Sci-Fi rather than science:
    • The scientific and technological progress of humanity testifies to the "golden rule" that everything comes at a price: we pay a lot to get but a little. Machines will therefore stay machines and will never be able to replace human intelligence entirely.
    • If that is true and the strong AI hypothesis does not hold, then human intelligence did not come about spontaneously either.
  • Non-function does not by itself become function. To claim otherwise would be equivalent to saying that a man can pull himself out of a swamp by his own hair:
    • The spontaneous emergence of function out of non-functional components is not observed in nature. Here functionality means orientation to purpose, usefulness or utility of parts of a single whole [Abel 2011].
    • Inanimate nature is inert to purpose or function; it does not care whether things are functional or not [ibid.].
      • In case someone wonders, preadaptation is a different story as it involves switching between functions in a given biological context rather than the emergence of function out of non-function.
  • The universe is fine-tuned for intelligent life (the strong anthropic principle, see [Behe et al. 2000]):
    • The multiverse hypothesis is non-scientific by definition, since the scientific method assumes direct observation and analysis of things happening in our world. A hypothetical increase in the probability of incredibly implausible events in this world, achieved indirectly by introducing parallel universes, does not count.
    • The weak anthropic principle attempts to dismiss the need to explain the fine-tuning of our universe by positing that "if man had not been around, there would have been no observers of fine-tuning". This is not a way out either, because correlation is not causation, as pointed out earlier. The weak anthropic principle fails to show that the following two questions need not be answered:
      • Why is there such an unbelievably lucky coincidence of universal constants? and
      • Why do the constants need to be so subtly tuned?

Bibliography

  1. D. Abel (2011), The First Gene.
  2. M. Behe (1997), Darwin's Black Box.
  3. M. Behe, W. Dembski and S. Meyer (2000), Science and Evidence of Design in the Universe, Ignatius Press.
  4. R. Dawkins (1996), The Selfish Gene.
  5. W. Dembski and J. Wells (2007), The Design of Life.
  6. D. Depew and B. Weber (2011), The Fate of Darwinism: Evolution After the Modern Synthesis, Biological Theory 6(1), pp. 89-102.
  7. W. Ewert, W. Dembski and R. Marks (2012), Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem, BIO-Complexity (open access).
  8. К. Виолован (1997), Problems of Abiogenesis as the Key to Understanding the Untenability of the Evolutionary Hypothesis, educational centre "Шестоднев" (in Russian).
