AE BB DB Explorer

Date: 2002/05/18 16:46:53, Link
Author: ExYECer
I have contacted you and will be more than willing to describe what happened to me on the ISCID brainstorm boards, even though this may mean permanent banishment per Dembski's orders.

Date: 2002/05/25 12:35:26, Link
Author: ExYECer
Quote (theyeti @ May 23 2002,16:32)
Awhile back I had a discussion about the DI's wacko response to the PBS evolution series here.  Scroll down a few messages.  There are some links to some good recent papers in that thread.  The references are:

Trends in Biochemical Sciences 2001, 26:591-596

Proc Natl Acad Sci U S A 1995 Mar 92:2441-5

J. Biol. Chem., Vol. 276, Issue 10, 6881-6884, March 9, 2001

Annu. Rev. Biochem. 2000. 69:617-650.


Let me add some links to the references

Proc. Natl. Acad. Sci. USA, Vol. 97, Issue 15, 8392-8396, July 18, 2000 Interpreting the universal phylogenetic tree, Carl R. Woese

This article was referenced by others; a few quotes:


Archaeal Phylogeny Based on Ribosomal Proteins
Oriane Matte-Tailliez , Céline Brochier , Patrick Forterre and Hervé Philippe

Until recently, phylogenetic analyses of Archaea have mainly been based on ribosomal RNA (rRNA) sequence comparisons, leading to the distinction of the two major archaeal phyla: the Euryarchaeota and the Crenarchaeota. Here, thanks to the recent sequencing of several archaeal genomes, we have constructed a phylogeny based on the fusion of the sequences of the 53 ribosomal proteins present in most of the archaeal species. This phylogeny was remarkably congruent with the rRNA phylogeny, suggesting that both reflected the actual phylogeny of the domain Archaea even if some nodes remained unresolved. In both cases, the branches leading to hyperthermophilic species were short, suggesting that the evolutionary rate of their genes has been slowed down by structural constraints related to environmental adaptation. In addition, to estimate the impact of lateral gene transfer (LGT) on our tree reconstruction, we used a new method that revealed that 8 genes out of the 53 ribosomal proteins used in our study were likely affected by LGT. This strongly suggested that a core of 45 nontransferred ribosomal protein genes existed in Archaea that can be tentatively used to infer the phylogeny of this domain. Interestingly, the tree obtained using only the eight ribosomal proteins likely affected by LGT was not very different from the consensus tree, indicating that LGT mainly brought random phylogenetic noise. The major difference involves organisms living in similar environments, suggesting that LGTs are mainly directed by the physical proximity of the organisms rather than by their phylogenetic proximity

Proc. Natl. Acad. Sci. USA, Vol. 98, Issue 3, 805-808, January 30, 2001 The universal nature of biochemistry  Norman R. Pace

Next reference

Lluís Ribas de Pouplana, Paul Schimmel, Aminoacyl-tRNA synthetases: potential markers of genetic code development, Trends in Biochemical Sciences 26 (10) (2001) pp. 591-596.

Operational RNA code for amino acids in relation to genetic code in evolution.  Ribas de Pouplana, L ... Schimmel P
J Biol Chem 2001 Mar 9;276(10):6881-4.

Some links to research

AMINOACYL-TRNA SYNTHESIS  Annu. Rev. Biochem. 2000 , Vol. 69: 617-650.


Aminoacyl-tRNAs are substrates for translation and are pivotal in determining how the genetic code is interpreted as amino acids. The function of aminoacyl-tRNA synthesis is to precisely match amino acids with tRNAs containing the corresponding anticodon. This is primarily achieved by the direct attachment of an amino acid to the corresponding tRNA by an aminoacyl-tRNA synthetase, although intrinsic proofreading and extrinsic editing are also essential in several cases. Recent studies of aminoacyl-tRNA synthesis, mainly prompted by the advent of whole genome sequencing and the availability of a vast body of structural data, have led to an expanded and more detailed picture of how aminoacyl-tRNAs are synthesized. This article reviews current knowledge of the biochemical, structural, and evolutionary facets of aminoacyl-tRNA synthesis.

Date: 2002/09/22 10:01:39, Link
Author: ExYECer
I am not sure if these have been mentioned, but an interesting 'discussion' between Wells and Musgrave can be found at Peppered moths


…Using caged moths, Mikkola (1984) observed that `the normal resting place of the Peppered Moth is beneath small , more or less horizontal branches … probably high up in the canopies, and the species probably only exceptionally rests on tree trunks…'
…In twenty-five years of field work, Clarke (1985) and his colleagues found only one peppered moth on a tree trunk…

…in the 1980s…biologists found that in the wild peppered moths do not rest on tree trunks…


   (2) Even if the correct number were 168 rather than 6, this would still represent only a tiny percentage of the tens of thousands of peppered moths studied by field researchers between the 1950s and 1990s.


Exposed tree trunks versus tree trunks

   (3) I do not claim that peppered moths NEVER rest on tree trunks, but only that they do not NORMALLY rest on tree trunks in the wild. This is the conclusion of everyone who has studied the natural resting-places of peppered moths, including Majerus. In addition to the conclusions you already cite from my work, I could add the following from Majerus's book: "Peppered moths do not naturally rest in exposed positions on tree trunks.... Data on the natural resting sites of the peppered moth are pitifully scarce, and this in itself suggests that peppered moths do not habitually rest in exposed positions on tree trunks.


And this incredibly ignorant comment:

  Finally, Thomas claims - without mentioning specifics - that I misrepresent a 1985 paper by Clarke, Mani & Wynne.  Clarke et al. (1985) wrote that "all we have observed is where the moths do NOT spend the day.  In 25 years we have found only two" - one on a tree trunk, and another on a wall near a mercury vapor trap.  To appreciate the significance of this - and the numbers cited in the other papers - it is helpful to note that Steward (1977) listed 52 studies conducted between 1952 and 1974, involving a total of 8,426 peppered moths.  Clearly, the one moth reported by Clarke et al. (1985), and the six moths reported by Majerus (1998) as resting on exposed tree trunks, represent only a vanishingly small percentage of all peppered moths studied.

Date: 2002/09/25 09:07:10, Link
Author: ExYECer
I am not sure how strong the case is. In NFL it seems to me that Dembski has redefined Behe's original statement by focusing on _original function_ or _basic function_. The problem is that once intermediates of a different function are allowed, IC systems seem to lose their relevance. Only if it can be argued that an IC system has to maintain its original function in its intermediates can one argue a la Dembski/Behe.

Allowing different intermediate functions opens the door even wider for evolutionary mechanisms.

Dembski's NFL arguments surely suggest a new variant though I would disagree with Dembski about Behe's original IC claim.


The concept of an "invariant" was introduced, something which falsifies a hypothesis, or posit. Irreducible complexity, Dembski said, is Darwin's invariant, but only when defined in the right way. Dembski then "tightened Behe's invariant" and talked about many of the standard criticisms of irreducible complexity (scaffolding can support IC structures, co-optation, remove 2 parts and function is restored, etc.), and presented a new revised version of irreducible complexity:

1. Removal of one part destroys original function.
2. Removal of multiple parts kills system's original function.
3. System has numerous complex interacting parts.
4. System is minimally complex in relation to its minimal function for selective advantage.


Seems that Dembski admits that a redefinition of Behe's original function was needed.

Yet Dembski also argues


Dembski discussed Michael Behe and his "irreducible complexity" approach to ID, and mentioned that Behe is coming to Albuquerque next March. Amazingly, Dembski described several effective critiques of Behe's ideas, including scaffolding, co-aptation, redundant complexity, ignorance of the scientific literature, appeals to ignorance, incremental indispensability, and reducible complexity. But Dembski blew off all the criticisms, as if simply mentioning their existence effectively counters them. Here's an example: Dembski mentioned scaffolding and the Roman arch example, which Massimo Pigliucci happened to have presented to NMSR at his talk on August 11th. Here, a mound is built, then stones are placed on top to make the arch; once the last stone is in place, the mound is removed, leaving the arch in apparent "irreducible complexity" (take away any stone and the whole thing falls down). Dembski's counter to this example was to claim you can't have "delayed gratification," i.e. that in living systems, you would need the arch to work right away, and wouldn't possibly have a "mound" stage.

Dembski described his addition to Behe's "irreducible complexity" required to tighten it up: it's a system where removal of a single part destroys the original function, removal of multiple parts destroys the original function, the system comprises numerous and diverse parts, and (Dembski's innovation) the system is minimally complex for the required functionality. He calls this "Irreducible Complexity 2.0" (read: two-point-oh), and claims this is the "Darwin-stopping invariant." In five to ten years, Dembski said, "Darwinism" will stand alongside failed theories like alchemy and perpetual motion. He said Darwin's unpaid debt is an unpayable debt, and that leaves only Design.


Date: 2002/12/06 00:40:46, Link
Author: ExYECer

ID does not have a problem with the idea that living things may have been designed through a long series of mutations (and whether this is true makes another interesting investigation). But it's the 'random' part that ID claims is demonstrably false.
Nor is Darwinism random either, so I guess ID has nothing much against Darwinism? But the problem for ID is that it has to eliminate natural mechanisms in order to infer design. And if specifications can be generated for almost any hypothesis, not much would be left to chance or regularity (can we say false positives? The eye looks designed, just like a camera lens, for instance). ID is not helping us resolve much in understanding how these supposed designs happened. In fact its failure to distinguish between intervention and 'front loading' makes its approach not much different from methodological naturalism, other than that ID does not seem to be interested in or able to propose its own hypotheses, and thus seems to be limited to disproving hypotheses, just like good old science but more restricted.

Mutations are still random 'with respect to function/benefit'. It's that simple. That mutations are not random in time, location or environment is hardly a surprise, but so far mutations seem to remain random with respect to their benefit to the organism in a given environment.
ID'ers make vague claims that science cannot explain 'X' and, if history is any predictor of the future, will continue to be shown wrong.

So far ID has failed to show how it can reliably identify rarefied design. In fact it has major problems with issues such as specification (almost anything can be specified, and thus not much chance remains: can we say false positives?). CSI and the law of conservation of CSI seem to be failing to deliver. The 'law of conservation of CSI' holds only _in a closed system_; more correctly, since it is not really a conservation law despite Dembski's confusing use of terms, CSI in a closed system can only decrease. Sound familiar? Entropy in a closed system can only increase... Of course, in an open system CSI can increase, just like entropy can decrease. It's that simple.
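The thermodynamic side of the analogy can be written out explicitly (standard second-law bookkeeping, not anything from Dembski's text):

```latex
% closed system: entropy is non-decreasing
\Delta S_{\mathrm{sys}} \geq 0

% open system: split the change into exchange with the surroundings
% and internal production; only the production term is constrained
\Delta S_{\mathrm{sys}} = \Delta S_{\mathrm{exch}} + \Delta S_{\mathrm{prod}},
\qquad \Delta S_{\mathrm{prod}} \geq 0

% so \Delta S_{\mathrm{sys}} < 0 is possible in an open system,
% whenever \Delta S_{\mathrm{exch}} < -\Delta S_{\mathrm{prod}}
```

This is exactly why "X can only decrease in a closed system" says nothing about what happens when the system exchanges with its surroundings.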

Perhaps Dembski and the ID movement could benefit from a more rigorous derivation of their claims (see Wolpert's comments, for instance). But I foresee that any such effort will be the downfall of ID, since the law of conservation of CSI will wither away as just a variant of the 2nd law. Specification will be shown to be either too subjective and all-inclusive or impossible to specify in a mathematically sound format. The ID filter will be seen to deal poorly with false positives and with 'we don't know'.

As Miller stated so well, most scientists are not interested in the intelligent design claims for the simple reason that ID cannot really be helpful in scientific inquiry. Dembski himself has condemned ID to such a status of irrelevance with his latest posting.

Date: 2002/12/06 00:43:50, Link
Author: ExYECer
Arn Moderator 4 wrote:

I've deleted a couple of non-substantial posts already. I'm going to pay special attention to this thread. No short quip posts. Substance and respect, please, if you will.

Hence my to-the-point and substantial posting in response to Dembski. If such approaches are considered spamming and worthy of invoking rule 6, then perhaps I should break up the posting into, let's say, 6 smaller pieces? But that seems contradictory to the 3-postings-per-day rule. In effect, ARN moderator 4's suggestion would enforce a large posting instead of multiple smaller ones.

Dr Dembski

I find it fascinating how you are hopping from the 'No Free Lunch' theorems as a foundation of your argument to a 'No Free Lunch principle'.


The Design Inference laid the groundwork. This book demonstrates the inadequacy of the Darwinian mechanism to generate specified complexity. Darwinists themselves have made possible such a refutation. By assimilating the Darwinian mechanism to evolutionary algorithms, they have invited a mathematical assessment of the power of the Darwinian mechanism to generate life's diversity. Such an assessment, begun with the No Free Lunch theorems of David Wolpert and William Macready (see section 4.6), will in this book be taken to its logical conclusion. The conclusion is that Darwinian mechanisms of any kind, whether in nature or in silico, are in principle incapable of generating specified complexity. Coupled with the growing evidence in cosmology and biology that nature is chock-full of specified complexity (cf. the fine-tuning of cosmological constants and the irreducible complexity of biochemical systems), this conclusion implies that naturalistic explanations are incomplete and that design constitutes a legitimate and fundamental mode of scientific explanation.

and several other references to the importance of the NFL theorems.

It is interesting to find out that one of the fundamental principles of your book has now been sidetracked when it became clear that the NFL theorems are likely not relevant for evolutionary mechanisms. It still saddens me that you accuse people of 'smuggling in' CSI, especially Tom Schneider, who has successfully disproven most of your claims about Ev.
So now the question is reduced to the 'smuggling in' of complex specified information. How is such information 'smuggled in'? Your "Conservation of information" theorem of limited applicability, also known as the second law of thermodynamics, shows that in a closed system information can indeed only decrease. But what about an open system? Information is imparted on the system by making choices, whether by intelligent design or by some vetting algorithm like natural selection, which transforms information about the environment into the genome. The reason why this works so well for DNA is that it has some very useful properties: it can store historical information, and it can copy/duplicate such information.

Your argument that 'No Free Lunch' theorems/principles show that information/entropy can only increase/decrease through intelligent design is begging the question indeed. What is the difference between intelligent design and natural design, I ask you? In both cases, choices are made that lead to increased correlation between the genome and the environment; hence information is transferred from the environment into the genome.

Furthermore, your argument about front loading becomes meaningless: where is this front loading supposed to have taken place? It must have happened before the laws of physics came into existence, and in any event your approaches do not even allow us to distinguish between front-loaded and intervention design. As Murray has so very aptly argued, this makes intelligent design nothing different from methodological naturalism.
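The point about selection as a 'vetting algorithm' transferring environmental information into the genome can be sketched with a toy simulation. This is purely illustrative (it is not Schneider's actual Ev program; the target string, population size, and mutation rate are arbitrary choices standing in for 'the environment'):

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stands in for "the environment"
N, MU = 50, 0.05                           # population size, mutation rate

def fitness(genome):
    # correlation with the environment: number of matching sites
    return sum(g == t for g, t in zip(genome, TARGET))

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(N)]
best = []
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    best.append(fitness(pop[0]))
    parents = pop[: N // 2]                # selection: NOT random w.r.t. benefit
    # mutation: random w.r.t. benefit; one unmutated elite is kept
    pop = [pop[0]] + [
        [(1 - b) if random.random() < MU else b for b in random.choice(parents)]
        for _ in range(N - 1)
    ]

# the genome-environment correlation can only have grown
assert best[-1] >= best[0]
```

The mutations are blind to the target; only the cull is informed by it, yet the match between genome and environment rises over the generations, which is the 'choice' step doing the information transfer.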

So what do we have so far?

1. NFL theorems are not really that important anymore.
2. NFL principles are now the 'hot topic'; of course, they lack even more in mathematical foundation, despite Dembski's assertion that "The No Free Lunch Principle states that if you have some naturalistic process whose output exhibits specified complexity, then that process was front-loaded with specified complexity." As argued by Wolpert, one of the authors of the NFL theorems:

I say Dembski "attempts to" turn this trick because despite his invoking the NFL theorems, his arguments are fatally informal and imprecise. Like monographs on any philosophical topic in the first category, Dembski's is written in jello. There simply is not enough that is firm in his text, not sufficient precision of formulation, to allow one to declare unambiguously `right' or `wrong' when reading through the argument. All one can do is squint, furrow one's brows, and then shrug.


Indeed, throughout there is a marked elision of the formal details of the biological processes under consideration. Perhaps the most glaring example of this is that neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process. Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.
3. Conservation of information laws seem to be nothing different from the second law of thermodynamics.
4. Intelligent design cannot distinguish itself from methodological naturalism when it is unable to distinguish between front loading and intervention.
5. Specified complexity seems to be a subjective and meaningless concept, in that one could easily specify any chance/regularity hypothesis, leading to countless false positives. May I point, in this context, to the very apt analysis by Sobel:


From this second illustration can be gathered that Dembski's theory enables a moderately imaginative person, with a list of possible delimitations of an event, easily to eliminate relevant chance-hypotheses for the event, if they all make more probable than not its non-occurrence, and avoiding 'false negatives' concerning relevant chance-hypotheses for this event is somewhat (it need not be very) important to him. From the two illustrations, one may gather that by the lights of Dembski's book, we are entitled, and will always be entitled to conclude, that not much happens by chance.
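The averaging at the heart of points 1 and 2 above can be seen at toy scale. This sketch is my own illustration, not Wolpert and Macready's formalism: it enumerates every possible fitness function on a three-point space and shows that two different fixed search orders find equally good solutions on average, which is what the NFL theorems say for a fixed fitness function drawn uniformly:

```python
from itertools import product

X = [0, 1, 2]                                    # a tiny search space
ALL_FUNCTIONS = list(product([0, 1], repeat=3))  # every f: X -> {0, 1}

def best_after(order, f, k):
    # best fitness found after evaluating the first k points in `order`
    return max(f[x] for x in order[:k])

for k in (1, 2, 3):
    avg_forward = sum(best_after((0, 1, 2), f, k) for f in ALL_FUNCTIONS)
    avg_reverse = sum(best_after((2, 0, 1), f, k) for f in ALL_FUNCTIONS)
    # averaged over ALL fitness functions, the two searches are equal
    assert avg_forward == avg_reverse
```

The equality holds for any non-repeating search order. The Wolpert quote above makes the key caveat: in co-evolution the fitness surface changes as the genomes change, so the fixed-function averaging this demonstration relies on no longer applies.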
As far as the flagellum is concerned, Ken Miller has posted a prepublication of an article that will appear in a volume entitled "Debating Design: From Darwin to DNA," edited by Michael Ruse and William Dembski, to be published by Cambridge University Press early in 2003. In another prepublication, Miller addresses "Answering the Biochemical Argument from Design".

But what I find most telling is that the design inference has now retreated from trying to provide scientific contributions to a mere "Intelligent design, in contrast to Darwinism, is not a theory about process but about creative innovation." It should not come as a surprise that intelligent design researchers have been less than successful in finding funding for research into something that seems unable to contribute much to the scientific discussion.

Not surprisingly, Darwinism, which does propose real scientific pathways allowing us to extend our understanding of how life evolved, carries a burden that ID indeed need not deal with: providing hypotheses which can be disproven. Intelligent design, unable even to address whether innovation occurs as an intervention or as some form of front loading, has to deal with the fact that its foundations, the NFL theorems, conservation of specified complexity and specification, are falling apart fast. Dembski argues "The formal theory for specifications that I develop maps onto the biology unproblematically," but I have yet to see anything resembling such a theory. In fact the 'theory' seems to show that specification depends inherently on subjective interpretation and can in principle be extended to any hypothesis (see also Sobel). The specification of the flagellum shows how meaningless 'specification' really is. It looks like an outboard engine. Well, the inner ear looks like a drum: can we now infer design for the ear? Snowflakes look like little sculptures once magnified enough. The sunset looks like an expressionistic painting. Need I say more?

To state that biology has remained empty-handed in explaining biological complexity seems to show a tendency to ignore the known literature on these topics. But I doubt that Dembski is interested in discussing how ID fares compared to scientific inquiry into these topics; after all, ID has no burden at all. Of course, if Dembski applied his argument consistently he would have to argue that ID bears the burden of providing convincing evidence of design and its designers, but ID is not interested in process and seems to be stuck detecting rarefied intelligent design using a faulty filter. See for instance the excellent article by Elsberry and Wilkins, The advantages of theft over toil: the design inference and arguing from ignorance, where they show how Dembski's filter fails to incorporate 'we don't know' and provides a privileged and unwarranted position to the design inference.

That Dembski has abandoned much hope for a scientific breakthrough for ID seems obvious when he argues for political approaches instead. It seems that the 'Wedge strategy' is alive and well. Of course, Bruce Gordon seems to have realized as much:


"... the theory has been prematurely drawn into discussions of public science education where it has no business of appearning without broad recognition from the scientific community ... inclusion of design theory as part of the standard discourse of the scientific community, if it ever happens, will be the result of a long and difficult process of quallity research and publication. It also will be the result of overcoming the stigma that has become attached to design research because of the anti-evolutionary diatribes of some of its proponents on the one hand and its appropriation for the purpose of Christian apologetics on the other. ...If design theory is to make a contribution to science, it must be worth pursuing on the basis of its own merits, not as an exercise in Christian 'cultural renewal,' the weight of which it cannot bear.... In conclusion, it is crucial to note that design theory is at best a supplementary consideration introduced alongside (or perhaps onto by way of modification) neo-Darwinian biology and self-organizational complexity theory. It does not mandate the replacement of these highly fruitful research paradigms, and to suggest that it does is just so much overblown, unwarranted, and ideologically driven rhetoric." Bruce Gordon, ex-CRSC Fellow, Science and Theology 2:1 (2001), p. 9
See also Here

Perhaps Dembski may help us understand where he believes ID should be going, other than following the inevitable political route, now that the bridges to a scientific route seem to have been burned effectively by Dembski's latest admissions of what ID is and isn't. I had some hopes that ID would provide a positive research program that would expand our understanding, but that does not seem to be a burden ID is willing or able to carry.

I am also interested in why, if Dembski believes that the rules of engagement are fixed in favor of evolutionists, he did not invite, for instance, Wesley Elsberry or Richard Wein to present their case at the last RAPID meeting. It seems that ID has only itself to blame here.

[PS: I will be addressing some historical revisionism of Behe's IC "This becomes immediately evident from reading Behe since in his definition of irreducible complexity, the function of the system in question always stays put." and many other interesting issues raised by Dembski. ]

And some references about evolution and biological complexity:

ev: Evolution of Biological Information

Evolution of Biological Complexity

Genomic Complexity in Micro Organisms and Digital Organisms

Some Techniques for the Measurement of Complexity in Tierra

The Evolution of Complexity and the Value of Variability


The hypothesis that environmental variability promotes the evolution of organism complexity is explored and illustrated, in two contexts. A coevolutionary `Iterated Prisoner's Dilemma' (IPD) ecology, populated by strategies determined by variable-length genotypes, provides a quantitative demonstration, and an example from evolutionary robotics (ER) provides a more qualitative and naturalistic exploration. In the ER example, the above hypothesis is illustrated in real environments, and the organism complexity is seen in robots exhibiting relatively complex behaviours and neural dynamics. Implications are drawn for the emergence of complexity in general, and also for artificial evolution as a design methodology.

Complexity and Self-Organization

What is complexity


The physical complexity of a sequence refers to the amount of information that is stored in that sequence about a particular environment. For a genome, this environment is the one in which it replicates and in which its host lives, a concept roughly equivalent to what we call a niche.

Information is a statistical form of correlation, and thus requires, mathematically and intuitively, a reference to the system that the information is about.

As we saw above, information is revealed, in an ensemble of adapted sequences, as those symbols that are conserved (fixed) under mutational pressure. Imagine then that a beneficial mutation occurs at a variable position. If the selective advantage that it bestows on the organism is sufficient to fix the mutation within the population,(24) the amount of information (and hence the complexity) has increased. A beneficial mutation that is lost before fixation does not decrease the amount of information, nor does this happen if a neutral mutation drifts to fixation.
In this paper Adami clarifies many of the concepts relevant to complexity, such as information, entropy, etc.
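Adami's idea that information shows up as conserved (fixed) sites in an ensemble of adapted sequences can be sketched numerically. In this toy (the alignment is invented for illustration, not taken from the paper), per-site information is the maximum possible entropy minus the observed entropy of each column:

```python
import math

# a toy "ensemble of adapted sequences": the first three sites are
# conserved (fixed under mutational pressure), the last two are variable
alignment = ["ACGTA", "ACGTC", "ACGAG", "ACGCT"]

def site_information(column, alphabet_size=4):
    # I = H_max - H(column), in bits
    h = 0.0
    for symbol in set(column):
        p = column.count(symbol) / len(column)
        h -= p * math.log2(p)
    return math.log2(alphabet_size) - h

info = [site_information([seq[i] for seq in alignment]) for i in range(5)]

# conserved sites each carry the full 2 bits; variable sites carry less
assert info[0] == info[1] == info[2] == 2.0
assert info[3] < 2.0 and info[4] < 2.0
```

Summing `info` over all sites gives the sequence's physical complexity in Adami's sense: a beneficial mutation that goes to fixation turns a variable column into a conserved one and so raises the total, which is exactly the information-increase scenario quoted above.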

Date: 2002/12/07 13:22:54, Link
Author: ExYECer
Originally posted by mturner:
Originally posted by Ex_YEC_er:
If that is the case then intelligence and volition cannot be intelligent design since ID is inferred through the elimination of chance and regularity.

So what is it ?

Wrong as always. When are you going to accept that ID is about no such thing.

As I said before, you seem to be unfamiliar with the major proponents' arguments in the ID world. Dembski's design inference argues, incorrectly I might add, that rarefied design can be inferred through the elimination of chance/regularity hypotheses. In fact, if such a hypothesis cannot be rejected, then ID has to be rejected.


For ID, volition and intelligence are regularity, and evolution is the product of this regularity, not the product of chance, as you Darwinists would have us believe.

Why is it that opponents of Darwinism so often seem to fail to understand that chance is but ONE aspect of the mechanism? In fact it is Dembski who seems to be arguing that intelligent design cannot be captured in regularity, or, if it can, then it is not design anymore. If Mturner wants to argue against Dembski, that's fine with me; join the fast-growing club of those who object on scientific grounds to the Dembski design inference.


ID does not deny the existence of random causation, the way you deny the existence of intelligent causation.

A shameless misrepresentation of my arguments, I would argue. I in fact accept intelligent causation in nature. If you want to argue, please at least present my arguments correctly, or run the risk of being called on spinning strawmen.


Randomness and regularity  both exist, there is no effort to "eliminate" them, but regularity (intelligence/volition) acting in direct response to chaotic, random  environmental change (chance) is what brings about organismic adaptation and evolution; not Darwinism's random, accidental mutation supposedly being "regulated" by chaotic, random, accidental, environmental pressures. Absurd.

Welcome aboard opposing the ID inference a la Dembski/Behe then. Of course, you seem to be somewhat unfamiliar with the relevant evidence and lacking in support for your thesis, but I will ignore that for the moment since I am thrilled to have you oppose Dembski's design inference.


And so that is what ID is really about, and what Darwinism is really about, whether you like it or not, and whether or not you can ever bring yourself to face that, for you, painful truth.

So far it seems that mturner is having some problems facing the truth. I will be kind and gentle when introducing him/her to the facts of life.

Welcome aboard the anti-Dembski-ID train though. It is comforting to see that even pro-intelligent-design people are realizing that Dembski's arguments are just not useful in inferring rarefied intelligent design. Of course, without Dembski's attempt to bring ID into science, one may wonder what will happen to ID as a scientific movement? That presumes, of course, that such a movement ever existed in any serious manner.  :D

Date: 2002/12/07 13:42:54, Link
Author: ExYECer
In response to my posting of my reply to Dembski, ARN moderator 4 has now found another objection :-)

I still don't like your tone. It "saddens" you, that Dembski "accuses"? That sucks, and is disrespectful.

He has therefore banned me from further participation in the thread. I would like to extend my appreciation to ARN moderator 4 for all his good work ;)

Date: 2002/12/10 23:46:58, Link
Author: ExYECer
Moderator 4 (aka Jack F) has decided to ban me for two weeks for posting the following message on an off-topic board at ARN


I'll lose interest in ARN soon, but have followed some posts that I was interested in before I decided to quit. But I happened to see this post by Mike B.

I was threatened with removal after two postings: one which asked why YXC's post was spam, and one that simply said I agreed with Douglas that XYC's post should stay because it brought up interesting points for discussion on Dembski's overly repetitive arguments. I then suggested a new direction for the thread, to focus on Dembski's limiting the definition of direct evolution and IC.

Tell Mike B I formally left ARN (had my membership removed) because I could ignore a lot from Jazz, but not this hair-trigger threat to ban me simply because I was in agreement with another poster. In none of my posts did I call Jazz out as a horrible moderator, yet everything I posted thereafter got deleted.

Mike B, know what you are talking about before you criticize another poster for being disgruntled. I did not hijack a single thread (ala DNAunion and his obsession with Julie/Wolf), nor did I make joking threats (ala CML), or post little sidetracking quips (ala Jazz himself on several posts).


Jack responded that


I just deleted a post from XYC. He posts a note from RB slamming the site.

Funny how Moderator 4 seems to consider RB's response to be 'slamming the site'.

Date: 2002/12/20 11:51:39, Link
Author: ExYECer
I find it fascinating how Wells, convinced of his own arguments, refuses to consider anything that shows that his portrayals of the 'Icons' left much to be desired.

I am glad that Wells is on the side of the ID movement.

Date: 2002/12/27 11:39:22, Link
Author: ExYECer
Cornelius states that


Unfortunately, evolution has always relied on #1 to establish itself as a scientific fact.

It seems that Cornelius may be unfamiliar with the scientific evidence supporting evolution if he believes that evolution has ALWAYS relied on #1 to establish itself as a scientific fact.

Reality will show that it is #2 and #3 which are the methods through which science has established evolution as a viable theory. I am somewhat surprised to see Cornelius make that statement, and others such as

To me it is clearly flawed, and it is little wonder that evolutionists dwell so much on #1. It clearly is the motivation for the theory.

which suggest to me that he has not really looked at the scientific evidence supporting evolution. It may be helpful if Cornelius could help us understand how he reaches conclusions like the ones above, or

I'm merely claiming that the scientific evidence points away from this.

What scientific evidence points away from naturalistic pathways?

If Cornelius really believes that God, and thus ID, fails if we can show that naturalistic pathways are sufficient, or that natural pathways explain the preponderance of evidence, then for all practical purposes we can consider ID to be refuted. ID by itself, according to Cornelius' definition, seems to provide us with nothing more to understand the world around us; it merely makes claims based on a religious motivation without attempting to provide a better explanation than that provided by scientific inquiry. And for good reason, since I do not believe that ID in this format can do much to compete with science.

ID seems to require that we ignore the vast amounts of data that support #2 and #3 while focusing on the strawman of #1. If Cornelius were serious about the statement that

ID is using all our knowledge to identify evolution as flawed.

then he would not have focused on making claims that evolution focuses on #1. Hundreds, even thousands, of papers on evolutionary mechanisms and theory put significant doubt on the validity or even supportability of Cornelius' claims.

Cornelius also states that

We had a good and fruitful discussion. My hunch that evolution is quite flexible vis-à-vis these phylogenetic results, if anything, were corroborated

This seems to ignore the strengths of the phylogenetic results vis-à-vis common descent and to focus instead on the fact that scientific theory can adapt to our increasing knowledge. So far the arguments do not really engage with the science but rather rely on hand waving and strawman arguments, while ignoring the vast amounts of evidence supporting the fact of evolution.

In a previous posting Cornelius confused my comments about nested hierarchies with correlated characteristics. Nested hierarchies are also correlated, but correlated characteristics need not be hierarchical.

Finally Cornelius wondered why I made the following statement

We should be careful not to mix our faith and science, since both will suffer. -- Francis

Cornelius states that:

"This obviously does not derive from science nor the Scriptures, so I'm not sure why you say this."

If science has to give way to our theological thinking, then both science and theology will suffer. Of course science and theology can live together in their own realms, but when theology gets misapplied, as for instance in many YEC approaches, it becomes a destructive force to both science and religion. As an ex-YEC-er I have seen much of this.

In Christ

Date: 2003/01/05 15:12:06, Link
Author: ExYECer
I have been using Sober's paper and others to argue against Dembski on ISCID. For those interested, the paper can be found Online

More interesting papers by Sober

Dembski, not surprisingly, comments on Sober and on the use of Bayesian statistics

More comments by Dembski

Detecting Design? A First Response to Elliott Sober, by William A. Dembski

Fitelson's page

Sobel on Modus tollens


William Dembski would eliminate, in the light of Fine-Tuning, the particular chance-hypothesis Chance, not merely by the small probability for Fine-Tuning conditional on Chance, but by that together with the fact that Fine-Tuning is 'specified'. I explain in (Sobel forthcoming-b) why I recommend that his way of eliminating chance-hypotheses, whatever exactly that way is, not be tried at home.

Review by Sobel of The Design Inference: Eliminating Chance Through Small Probabilities

A devastating review imho although a bit 'telegraph style'

Date: 2003/01/11 16:56:49, Link
Author: ExYECer
I started a separate thread, per Irving's suggestion, to keep the original thread about survival of the least appetizing on track.
Irving suggested that Noel was looking for a real demonstration, but I believe the issue was

Those who want Darwinism to be true must demonstrate that random incremental change and selection can increase information.

And ev and others have shown that in principle mutation and natural selection are sufficient to increase information in the genome. Experiments in real life are more complicated, but SELEX experiments are the next step in showing that in 'real life' mutation and selection can increase information in the genome.

Hi Noel,

you state


Those who want Darwinism to be true must demonstrate that random incremental change and selection can increase information. Should they be able to do this, which is doubtful, it still does not follow that everyone would opt for their explanation.

That is a very simple demonstration, and various experiments and simulations have shown that this is indeed the case:

ev: Evolution of Biological Information


The ev model quantitatively addresses the question of how life gains information, a valid issue recently raised by creationists [32] (Truman, R. 1999), but only qualitatively addressed by biologists [33]. The mathematical form of uncertainty and entropy implies that neither can be negative, but a decrease in uncertainty or entropy can correspond to information gain, as measured here by Rsequence and Rfrequency. The ev model shows explicitly how this information gain comes about from mutation and selection, without any other external influence, thereby completely answering the creationists.
Evolution of Biological Complexity


In order to make a case for or against a trend in the evolution of complexity in biological evolution, complexity needs to be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that because natural selection forces genomes to behave as a natural 'Maxwell Demon', within a fixed environment genomic complexity is forced to increase.
In fact, as I have argued elsewhere, the fourth law of thermodynamics as proposed by Dembski is imho nothing more than a reformulation of the second law of thermodynamics for a closed system. Once we realize that in open systems entropy can decrease and information can increase, and that in evolution it is the environment which infuses information into the genome through selection, information increase is not that hard to achieve.
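The point that selection against an environment can push information into a population can be illustrated with a toy model. This is a minimal sketch, not Schneider's ev; the genome length, population size, mutation rate, and truncation selection scheme are all arbitrary illustrative choices. The 'environment' lives only in the fitness function; no target is ever written into the genomes themselves:

```python
import math
import random

random.seed(1)

L, POP, GENS, MU = 16, 64, 200, 0.01    # toy parameters (arbitrary)
ENV = [1] * L                           # the environment; only fitness() "knows" it

def fitness(g):
    # selection acts on agreement with the environment
    return sum(1 for a, b in zip(g, ENV) if a == b)

def info_bits(pop):
    # per-site information R = 1 - H(site), summed over sites (cf. Rsequence)
    total = 0.0
    for i in range(L):
        p1 = sum(g[i] for g in pop) / len(pop)
        h = sum(-p * math.log2(p) for p in (p1, 1 - p1) if p > 0)
        total += 1 - h
    return total

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
start = info_bits(pop)                  # near 0 bits for a random population
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # truncation selection: the environment culls
    pop = [[(1 - b) if random.random() < MU else b for b in g]
           for g in survivors for _ in (0, 1)]   # each survivor leaves two mutated copies
end = info_bits(pop)
print(f"info before: {start:.2f} bits, after: {end:.2f} bits")
```

The information in the population rises from near zero toward the maximum of L bits purely through replication, mutation, and environment-driven culling, which is the sense in which selection 'smuggles in' information from the environment into the genome.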


Others have commented on this:
Victor J. Stenger and Here

found via this link

Adrian L. Melott

A common objection is that ev somehow smuggled in information, although no one really seems to have shown where any smuggling was done. Of course, an increase in information requires 'smuggling in information' in the sense that selection 'smuggles in' information from the environment into the genome.
I would be interested in any evidence that the information was pre-coded.

Date: 2003/01/12 15:41:31, Link
Author: ExYECer

I notice that you still have not identified how and where Tom Schneider used teleology to gain a specific result. In fact you seem to confuse the hypothesis formulated by Tom with the experiment designed to support or disprove it. His hypothesis was that random mutations and selection could explain the observed Rfreq. But did he code this into his program? Of course not.

That this goal is always reached no matter what the initial condition is an example of robustness, but was this goal programmed into the program? Roger has so far failed to show that this is the case. In fact, if Roger had read the pages I referenced, he would have found that different initial conditions and runs lead to different outcomes.

The ev program was run repeatedly to 2000 generations starting with 100 different random seeds. The lowest observed final information content was 2.3 bits and the highest was 5.2 bits, with a mean of 3.8 ± 0.5 bits. Duplicate runs occurred 7% of the time. These duplicates do not affect any conclusions, but they do suggest that the random number generator is not the best. Despite this, the program invariably gave a significant information increase. From the observed values, we can determine that the probability of a return to zero information is 1.5 × 10^-14 (7.6 standard deviations).
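The quoted tail probability can be reproduced with a quick back-of-the-envelope check, assuming (as the '7.6 standard deviations' phrasing suggests) a normal approximation to the distribution of final information values:

```python
from math import erfc, sqrt

mean, sd = 3.8, 0.5              # final information over 100 runs, from the quote
z = mean / sd                    # distance of zero information from the mean, in sigmas
p = 0.5 * erfc(z / sqrt(2))      # upper-tail probability of a standard normal at z
print(f"z = {z:.1f} sigma, P(return to zero) ~ {p:.1e}")
# → z = 7.6 sigma, P(return to zero) ~ 1.5e-14
```

So the 1.5 × 10^-14 figure is simply the normal tail probability at 7.6 standard deviations.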

Fitness was not defined as a future goal; if you had read ev's manual and the accompanying papers you would have realized that the fitness function depends only on time t. Your claim that teleology or an intelligent selector is being used fails to show 1. that teleology is actually being used (in fact the opposite seems to be the case), and 2. that intelligence is a requirement for the selector.

As far as Rossum's claims go, he seems to be correct. Dembski does allow algorithms to generate information, and Dembski does allow that EAs can 'transfer' CSI. But just as EAs are probability amplifiers, human intelligence functions as a probability amplifier as well. Dembski's claim that in a closed system CSI can only decrease seems to be correct; after all, his 4th law seems to be nothing more than a reformulation of the 2nd law of thermodynamics. But once he allows the system to be open, whether through an intelligent designer or through a natural designer, complex specified information can increase in both cases.
What part of Rossum's response do you doubt?

So far you have failed to show where in the program Schneider used teleology. Your argument so far confuses the hypothesis with the program, suggests, contrary to fact, that the fitness is defined with respect to the future, and seems unaware of the fact that the outcome of the experiment is NOT always the same.

I suggest that you read the relevant papers and pages that deal with Schneider's program before you accuse him of something you cannot even support.

Date: 2003/01/12 17:36:46, Link
Author: ExYECer
From Here

Mike seems to have avoided addressing the real issues and instead focused on some minor ones. First of all, Mike objects to me stating that his approach and Poole's approach led to the same conclusion. Mike correctly points out that I confused Poole's paper with the papers in which the link between cytosine deamination and increased hydrophobicity was made. While I thank Mike for correcting my minor error, he seems to have ignored the real issue, namely that he used methodological naturalism to explain the tendency of cytosine deamination to increase hydrophobicity. In fact, from the moment he defined the instance of 'front loading', his approach is indistinguishable from methodological naturalism. Neither Mike nor others may have explained or shown how cytosine became incorporated, but both work from the assumption that it was.

Mike suggests that his approach allowed him to address the claim that 'an engineer would have replaced cytosine', but nothing in his approach supports this argument. All he has shown is that natural processes seem to have led to cytosine deamination and a corresponding increase in hydrophobicity. No effort was made to show that an engineer would or would not have used cytosine, or would or would not have replaced it. In fact, Mike has made no effort to show any link between his findings and the idea of front loading. Looking back in time and then saying that it must have been front loaded because it seems to have been selected for is begging the question.

Mike complains that the evolutionary approach claims that it must have been chance/evolution, but that is not very different from 'it must have been design'. Both are assumptions which would require some supporting evidence. The fact that cytosine deamination leads to increased hydrophobicity is no evidence for the premise that 'chance did it', nor for 'an intelligent designer did it'.
Mike wants to argue that different perspectives give different approaches, but I'd argue that these approaches are not distinguishable from methodological naturalism. No teleology is required to explain what happened for t>t_0, and no evidence of the need for teleology at t=t_0 has been provided.

Mike does suggest that he provided a description with 'purpose', but that seems to be like painting the bull's eye around the arrow, to use a common metaphor. Mike suggests that his approach allowed him to pre-specify that deamination of cytosine increased hydrophobicity, but that seems hard to imagine. The concept of evolution would also predict that if cytosine deamination were an important contributor to the increase of hydrophobicity, and if such an increase would increase fitness, then cytosine deamination events would be common.

It has been argued that Mike's approach cannot be distinguished from a methodological naturalism approach and that the findings do not help us answer the question of the presence of cytosine at instant t_0, the moment of front loading. Thus either chance/necessity or intelligent design may have been responsible for what happened at time t_0, but after time t_0 it was all methodological naturalism and not intelligent design.

Mike suggests that since cytosine inclusion was not a frozen accident, his premise may be preferable. But let's point out that Mike does not explain anything about the 'frozen design incident', thus leaving it for all practical purposes indistinguishable from a 'frozen accident'. Additionally, Mike may have created a strawman of the 'frozen accident' when the actual mechanisms may have been a combination of availability and selective advantages. That an engineer may exploit the same pathways that evolution may exploit also does not help us address the issue of front loading. Front loading/origins are separate from the evolution at t>t_0. Mike, however, has not provided any evidence that the event at t=t_0 involved front loading. Mike did initially suggest that there was some problem with cytosine formation in the prebiotic world, which would have been a way to eliminate chance/regularity as a possibility and thus strengthen a design inference, but as I have pointed out, our knowledge has increased and potential and realistic pathways may have been identified.

Mike seems to agree that from t=t_0 forward evolution did play a role, so now the question is whether t=t_0 requires, or preferably involved, an intelligent designer. So far no evidence has been provided that this is the case. If Mike wants to limit his claims to just refuting Poole, then his efforts may have helped towards this goal, but then the issue of front loading seems irrelevant. And I would say that Mike has not shown how the engineer is in any way limited in what he/she would or would not do.

Mike still seems to be confused when insisting that I claim that design has to be supernatural. I am pointing out that t>t_0 does not help resolve the intelligent design claim and that t=t_0 has not been addressed. Mike suggests that at t=t_0 some event took place without really defining the moment t=t_0, the circumstances of the event, or the goals of the event. Mike merely claims that at t=t_0 there was an initial state, namely that cytosine was present as one of the four bases in DNA. Unless Mike wants to argue that at t=t_0 a supernatural design event took place, he has no reason to suggest that I am requiring a supernatural design.

Mike now suggests that the existence of a 'sophisticated, universal genetic code' is positive evidence of a design event, but Mike once again fails to show this to be the case. In fact no positive evidence of such a design event has been provided, merely claimed. That sounds like a 'front loading of the gaps' explanation. Since Mike seems to agree that from this moment forward everything was fully explainable in natural terms, and no intelligent design was required, he has basically used the data which support common descent to argue for 'common design' without really explaining anything about this 'design'. Countless papers and hours of research have gone into finding plausible pathways to explain the origins of the genetic code, and none so far seem to have found any evidence of this intelligent design event; in fact, what may once have seemed a good point to place t_0 has been pushed back over time from the Cambrian to the prebiotic RNA world.
Mike wants to know what data would cause me to suspect that evolution was front loaded. This would require the following:

1. Mike needs to define what the purpose or goal(s) of the intelligent agent are.
2. Mike has to show that, given the chaotic and unpredictable nature of the world around us, this goal can be reliably reached.
3. It can be shown that natural processes without intelligent design could not have achieved the state at t=t_0.
4. It can be shown that there was indeed an intelligent agent present at t=t_0.

Mike objects to my scenarios of how cytosine may have become part of the genetic code as irrelevant, but they are very relevant to understanding what happened at t=t_0. Mike does not seem to have any evidence that the front loading at t=t_0 required an intelligent agent. Thus it is very relevant to show that the front loading at t=t_0 can be explained from a naturalistic perspective without the need for ID. At t>t_0, Mike and I seem to agree that purely natural processes were at work without the need for intelligent design.

As far as Nic's analogy goes, Mike may have responded, but that is far from having addressed Nic's observations. In fact, I would argue that Mike did not even address Nic's claims.

Date: 2003/01/12 20:28:32, Link
Author: ExYECer
Dear Nelson.

It seems clear to me that you are ignoring the details provided by me about possible pathways for evolution.

If you claim that front loading explains Pecten, then you accept that evolution can generate the eye as found in Pecten, unless you are not really talking about front loading but intervention.

Perhaps you can first share with us your front loading hypothesis with respect to, for instance, Pecten and the lobster? In fact you suggested that the Pax-6 gene 'supports front loading', but if that is the case your argument seems to be that, given the existence of Pax-6 at an instant t=t_0, you expect evolution to lead to the large variety of eye 'designs' found in nature. Is that correct, or are you backtracking from your claims that, let's say, every species/family/genus, at whatever level, was 'front loaded' independently at different instances in time/space?

I doubt that one can make a logically consistent claim of front loading that is not contradicted by the data, other than through the Pax-6 gene. But Pax-6 goes back in time quite some distance, long before the family Pectinidae arose.

To me it seems that you are not arguing for front loading but rather for intervention, since your objections seem terribly ad hoc and seem to refuse to recognize natural pathways to these structures. If that is the case, then you cannot be arguing for front loading, since teleological front loading is defined as: at t=t_0 the necessary information was inserted so that at a given time t_1, with t_1>t_0, a certain feature arises in a certain family/species. Non-teleological front loading would be that at a certain instance t=t_0 an initial state exists, and we can trace back to such an initial state, showing how the various eye 'designs' all seem to trace back to ancestral forms.

As far as the references to Korthof et al, they are meant to help the interested reader understand many of the problems found in Denton's work.

It seems evident to me that Nelson has not familiarized himself with the papers he quotes, but rather is relying on second-hand information which may or may not be relevant or even accurate. As I have shown, Dakin's 1908 statements are explained in more detail from 1967 onwards, where it was shown that the Pecten eye is very likely an evolutionary continuation of the single-retina eye with the addition of a reflecting layer. The transition is even easier to understand from a selective evolutionary viewpoint when realizing the advantage of these changes, namely the ability to see both in and out of water. Combine this with the fact that Pecten resides in a tidally affected area, and thus may be exposed to both water and air, and one realizes the selective advantage of the Pecten eye. Thus we have found the answer to Dakin's uncertainties.

Nelson complains that I do not provide sufficient detail on how natural selection and mutation built these eyes, but if Nelson were to argue for front loading he would have no choice but to accept that natural forces can lead to the Pecten eye, or Nelson should drop his claims about Pecten and front loading. Surely our ignorance of certain details should not be taken as evidence for front loading. In fact, although we have not yet obtained all the necessary evidence, a plausible pathway has been provided. Nelson may complain about 'sufficient details', but the amount of detail so far already exceeds that of any alternative hypothesis. And since Nelson seems to want to argue front loading, he also by default has to accept some time period in which evolutionary processes shaped the eye of Pecten into what it is right now.

Nelson then raises the spectre of Spondylus, which attaches to rocks, as if this poses a problem. Until Nelson can show us from the original research papers what the eye of Spondylus looks like compared to Pecten, we have no real way to discuss this. Secondly, until Nelson shows that there is actually a problem explaining the evolution of the eye in Spondylus and Pecten, we can merely speculate about what Nelson's argument may be. Since Nelson seems to accept the evolutionary history of Pecten and Spondylus, one may wonder why he seems to oppose the idea that evolutionary processes led to the eye 'designs', since he does seem to accept front loading and common descent. Perhaps Nelson believes that a mechanism other than evolutionary mechanisms played a role? He mentions front loading, but as I have shown, that merely states that at a given stage in time t=t_0 information was injected into the genome to allow Pecten and Spondylus to form their respective eyes. The fact that the Pectinacea were ancestral to the Spondylidae surely supports the evolutionary pathway. So it is not clear to me how Nelson suggests front loading could have helped Pecten and the Spondylidae. In fact, if Nelson is correct about the location of Spondylus and its eyes (so far the data seem vague on either aspect), then Nelson may have to explain why a front loader would lead to a system which is now defunct, namely the ability to see in air.

Nelson then confuses the issue of front loading and intervention even further when he states
If these biological features were poised to evolve into greater complexity through an intelligent agent then it wouldn't have been as difficult as a blind force tinkering with such a complex system.

Is Nelson suggesting that evolution is guided through an intelligent agent? Then he should not be arguing for front loading but instead for intervention.

Nelson still seems unwilling to deal with the available evidence, which includes intermediate stages for the varieties of simple and compound eyes. Perhaps Nelson wants to argue that the details are not sufficient, but it is just a matter of time before science finds all the common genes and variations that have led to the variety of eyes found in nature. So far the evidence strongly suggests both evolutionary mechanisms and, for many basic components at least, a common ancestor.

If Nelson had taken the time to look at the pictures then he would have noticed how these portray the variety of intermediate paths likely to have been taken in the evolution of the various eye forms.

Nelson still repeats his so far unsupported assertion that
Again, none of this shows how blind natural forces would, nor does it even explain why, natural selection would guide the organism down the difficult road of refraction to reflection in my particular examples

1. Could Nelson show that the road from refraction to reflection is difficult?
2. Could Nelson show that the road from refraction to reflection is even relevant for the lobster?

Nelson confuses the situation even further by claiming that


However, what I would expect from a Front-Loading perspective is that every step of the way was every bit more complex than the last; however, through the help of pre-positioned elements the evolution of these eyes was directed through intelligent agency.

So is it front loading or is it intervention? If it is front loading, then we have the situation that at a certain time the information needed for evolution to play out was injected into the genome of a common ancestor, and that from this common ancestor all the descendants arose with the large variety of eyes. This ignores, for the moment, the grasping-at-straws nature of such a front loading scenario, which would have to play out through an inherently chaotic and thus unpredictable system of interactions to eventually lead to the eye of Pecten.

Nelson presents no more evidence than that at some moment the basic building blocks were present that would eventually allow Pecten or any other organism to evolve an eye design. No effort is made by Nelson to show that the eyes of the lobster are optimal for the functioning of the lobster. In fact, Nelson merely argues that for the lobster eye, the eyes are perfect squares that are fine-tuned for the vision of the lobster. No further information is presented to support this case. And if Nelson wants to invoke fine-tuning and continue to argue for front loading, then Nelson has de facto accepted the fine-tuning power of evolutionary processes.

Given the contradictory stance of Nelson on the issue of front loading I would encourage Nelson to address the following issues.

1. Explain the hypothesis of front loading as it applies to Pecten.
2. If Nelson accepts front loading, does Nelson accept that natural processes are responsible for the shaping of the eyes of Pecten? Absent such an acceptance, Nelson cannot be talking about front loading here.
3. Can Nelson explain in detail the similarities and differences between Pecten and Spondylus, and can Nelson provide us with the arguments proposed by Dakin? Do the findings apply to the whole family of Spondylus or just some particular species? After all, the various species of Spondylus do seem to occupy a large variety of ecological niches.
4. Can Nelson show that the lobster eyes are perfect squares, or is Nelson using stylized drawings to reach these conclusions?

Perhaps Nelson may want to explain why the squares in the following picture are anything but perfect?

Perhaps Nelson was confused by the resulting drawings?

Perhaps Nelson can also appreciate what perfect squares really would look like?

Date: 2003/01/12 20:54:06, Link
Author: ExYECer
First of all Mike objects to me stating that his approach and Poole's approach led to the same conclusion.
Mike correctly points out that I confused Poole's paper with the papers in which the link between cytosine deamination and increased hydrophobicity was made. While I thank Mike for correcting my minor error, he seems to have ignored the real issue, namely that he used methodological naturalism to explain the tendency of cytosine deamination to increase hydrophobicity.

In my opinion, this was a major error on your part. Poole's conclusion was:

You are confusing two different issues. Mike complained that I had stated that Poole reached the same conclusion when it was in fact other researchers. Mike set off to address the claim that 'an engineer would have replaced cytosine'. But he fails to show why an engineer would really do this. In fact he accepts that from the moment t=t_0, the moment of front loading, evolutionary pathways played their role. So the only difference between Mike and others is that he tries to explain the initial condition through an appeal to design where others would explain it through an appeal to chance/regularity. Since Mike has not taken any effort to show why the explanation through design explains the initial situation better than non-design, and since science has already found likely pathways to explain the inclusion of cytosine in the genetic code, one may argue that not only has Mike failed to show that a designer was required, he has not even shown that an engineer would or would not have replaced cytosine. All Mike does is show that evolution used the opportunity of cytosine, nothing more. Mike has not even shown that there was any predisposition to be expected which would require an engineer. None of the studies quoted by Mike support a front-loading scenario that would require an engineer.

In fact from the moment he defined the instance of 'front loading' his approach is indistinguishable from methodological naturalism. Neither Mike nor others may have explained or shown how cytosine became incorporated, but both work from the assumption that it was.

The non-teleologist would say that it was the result of evolutionary tinkering, whereas a teleologist would say it was the deliberate result of intelligent design. This clearly separates the teleological explanation from the non-teleological explanation.

Nope, both the teleologist and the non-teleologist would have the same findings based on different presumptions. At instant t=t_0 cytosine was included in the genetic code; from there on evolution took over. Neither Mike nor others have shown how cytosine became incorporated; both work from the assumption that it was. There is no additional benefit to the teleological presumption, clearly, since both Mike and others have reached the same conclusion, namely that cytosine deamination leads primarily to more hydrophobic amino acids, which may be helpful for stability. But stability may be exactly what is selected for, so unless Mike can show that there is no natural pathway for cytosine to be included, Mike's argument has not explained anything that would lead one to conclude intelligent design. In fact, one may argue that all Mike has shown is front loading, an initial condition, without explaining whether it was intelligent design or non-intelligent design which led to that initial condition.

Mike suggests that his approach allowed him to address the claim that 'an engineer would have replaced cytosine' but nothing in his approach supports this argument.

Actually he has. The major point in his essay is the following:


Since hydrophobic interactions play a large role in protein folding and structure, the effects illustrated in figure 3 suggest C-T transitions may play a significant role in protein evolution. What's more, figures 5 and 6 suggest C-T transitions may also result in both alpha helix and beta sheet elongation/formation. This raises the intriguing possibility that the genetic code was not only designed to minimize deleterious mutations, but that this design objective was balanced against a seemingly contrary objective to evolve new proteins through what I will call the Increasing Hydrophobicity Effect (IHE). This might also explain why serine, rather than proline, is included in the amino acid pool generated by C-T transitions (figure 4 and the sole exception from figure 3). Serine is present as a consequence of losing proline. This substitution removes a strong helix breaker and replaces it with a residue that is indifferent to helix formation. And proline is not added to the mix so as to not increase the chances that an existing helix is broken.

Nothing shows that an engineer would have replaced cytosine for this reason. Mike is merely painting a bull's-eye around the arrow to explain why cytosine is present. Nothing is shown to indicate that the front loading requires an engineer, and no effort is made to even propose the details of how or why the engineer took these steps.

This point alone has profound implications for protein evolution.

So we do accept protein evolution after all?

These ideas stem from the hypothesis that is the exact opposite of Poole's, namely, that a designer would indeed incorporate cytosine despite its predisposition for deamination.

Front loading, either teleological or non-teleological, leads to the same facts. Mike wants to argue that an engineer would be required, and indeed that this is what an engineer would do, without showing much evidence to support this. Mike is in fact painting a bull's-eye around the arrow.

Looking back in time and then saying that it must have been front loaded because it seems to have been selected for is begging the question.

As evidenced from the above, Mike never said that it seems to have been front-loaded because it "seems to have been selected for". The counter-intuitive inclusion of cytosine, despite its predisposition for deamination, led him to study exactly what effects this phenomenon has, with an ID prediction in hand. If this is truly the result of intelligent design then he should find some utility for including cytosine. He found a working hypothesis that has implications for protein evolution, namely that cytosine's predisposition is used to almost "force" proteins to evolve.

Counter-intuitive from a design perspective, not necessarily from an evolutionary perspective. Others have taken the evolutionary perspective and reached the same conclusions Mike has: cytosine deamination leads primarily to more hydrophobic 'codons', and hydrophobic codons tend to be more stable. Nothing requires the addition of 'an engineer'.
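The claim both sides agree on here, that C-to-T transitions skew toward more hydrophobic amino acids, can be checked directly against the standard genetic code. Below is a minimal sketch of such a check (not code from either essay; the codon table and Kyte-Doolittle hydropathy values are standard, the tallying scheme is my own illustration):

```python
# Tally how every possible C->T point mutation in the standard genetic code
# shifts Kyte-Doolittle hydropathy (higher = more hydrophobic).
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W"   # first base T
         "LLLLPPPPHHQQRRRR"   # first base C
         "IIIMTTTTNNKKSSRR"   # first base A
         "VVVVAAAADDEEGGGG")  # first base G
CODON_TABLE = {a + b + c: AMINO[16 * BASES.index(a) + 4 * BASES.index(b) + BASES.index(c)]
               for a in BASES for b in BASES for c in BASES}

# Kyte-Doolittle hydropathy index per amino acid.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

up = down = same = 0
for codon, aa in CODON_TABLE.items():
    for i, base in enumerate(codon):
        if base != "C":
            continue  # only C->T transitions are of interest
        mutant = CODON_TABLE[codon[:i] + "T" + codon[i + 1:]]
        if aa == "*" or mutant == "*":
            continue  # skip substitutions involving stop codons
        if KD[mutant] > KD[aa]:
            up += 1
        elif KD[mutant] < KD[aa]:
            down += 1
        else:
            same += 1  # synonymous (or equal-hydropathy) change

print(up, down, same)  # -> 25 2 18
```

Increases clearly outnumber decreases (second-position C→T always raises hydropathy, third-position C→T is always synonymous), which is the observation both the teleological and non-teleological readings start from.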

The fact that cytosine deamination leads to increased hydrophobicity is evidence neither for the premise that 'chance did it' nor that 'an intelligent designer did it'.

Poole used the fact that cytosine is predisposed to deamination to say that this is the result of evolutionary tinkering. And this would follow: since evolution has no foresight, it is no surprise that such inefficient design should be found in the genetic code. However, Mike shows that C-T transitions lead to increased hydrophobicity. This has utility, since hydrophobicity is the dominant force in stabilizing a folded structure.

Could you show where Poole made this claim? Showing utility is hardly sufficient to show evidence against evolution and for design.

The concept of evolution would also predict that if cytosine deamination were an important contributor to increased hydrophobicity, and if such an increase raised fitness, then cytosine deamination events would be common.

Now this is what I call painting the bull's-eye around the arrow. Before Mike wrote about this, no non-teleologist wrote that evolution indeed included cytosine because it would increase fitness. That sounds more teleological than non-teleological; evolution has no foresight and therefore cannot include cytosine because of its predisposition for deamination. However, an intelligent designer can.

It would be helpful if Nelson were to familiarize himself with the arguments and available research before making such strawman claims. I am not saying that cytosine was included because evolution had foresight; I stated that if cytosine were included, then evolution explains the rest of the story, as does Mike. The only difference is the reason for the inclusion of cytosine. Mike argues that it was done by an intelligent designer; I argue that cytosine, while not optimal, may have been the only available chemical. As Joyce has shown, cytosine does seem to increase fitness despite the impact of cytosine deamination.

Mike now suggests that the existence of a 'sophisticated, universal genetic code' is positive evidence of a design event, but once again he fails to show this to be the case.

Mike has built the case for this in this essay here:

Not really. He merely paints a bull's-eye around a target while ignoring the available evidence. Mike makes no effort to explain why the front loading has to be intelligent.

At most Mike argues that


One way to distinguish an intelligent designer over natural selection is that the former has foresight, while the latter is myopic, working only on immediate benefits. While the argument is fuzzy, it would seem that the error correction capabilities, inherent in the DNA chemistry (and perhaps the genetic code) appear to reflect foresight, when such capabilities would become essential in the high-information state life forms that would exist hundreds of millions of years after the putative simple replicators.

But appearance is not evidence. And I find his parting comment ironic, since he also argues that early in the origins of life, cytosine was added to help increase the mutation rate of DNA.

One thing seems clear. Very early on, life became obsessed with error correction. The chemistry of DNA/RNA, the Genetic Code, and the proof-reading mechanisms behind information transfer are all biological universals. Apparently, one of the first "objectives" of evolution was to put a layer of constraints on evolution.

Which is it? Does a designer place restrictions on evolution, or actually loosen restrictions on evolution? Or are we perhaps talking about two uninformed designers? Or perhaps more?

Date: 2003/01/12 23:59:35, Link
Author: ExYECer
JxD: I am confused by the nature of this dialogue.  Why is Nelson here in place of Mike Gene to defend, uh, Mike Gene's thesis?  I am extremely disappointed by Nelson's sweeping claims in his essays, which come often without much support except for arguments from authority (i.e. Mike Gene).  Let us consider a few of them:

I am also disappointed with Nelson's sweeping claims; this is not the only thread in which Nelson makes sweeping claims and rejects evidence for evolution out of hand. The reason I posted the thread here is as an archive for material posted at ISCID, since the moderator has been known to delete my postings.

Date: 2003/01/13 11:33:59, Link
Author: ExYECer

You seem to be confused by my pointing out that Mike's approach is nothing different from what science would do, namely methodological naturalism, to show for instance that cytosine deamination leads to preferentially hydrophobic codons. I am not 'defending' methodological naturalism, since it quite obviously needs no defending.

If Mike wants to argue that his argument of 'front loading' gives some utility, then I agree, but in the end both the presumption of a utility preloading and a natural preloading lead to the same finding about cytosine deamination.

But Mike went beyond the claim of utility to state that he set out to show why an engineer would use cytosine. His argument however seems to amount to painting the bullseye around the arrow.

I am more than happy to accept that there was a different starting point between Mike and scientific researchers into cytosine deamination but I am merely pointing out that the starting point does not seem to make much difference to the final conclusion.

His framework is indeed completely naturalistic for t>t_0, with one minor variation: at a certain instant t=t_0 there is an initial condition which Mike considers to be designed and science considers to be due to regularity/chance. But this assumption has no impact on the scientific method used to determine the consequences of the initial conditions, which remains purely methodological naturalism. In fact Mike himself has stated that for t>t_0 all the processes are natural.

I am sorry to hear that you believe these are pre-canned arguments, since I have not really dealt with front loading in this detail before, other than pointing out Murray's compelling arguments. One may not like the direction the conclusions of this thread seem to have taken, but is that not the intention of this forum?

Btw, I find your statements to be hovering on argument ad hominem, moderator or not. And since you raised them in public rather than in private, I feel compelled to defend myself. Unless you want to open up a discussion that seems contrary to the spirit of this forum, I suggest that you contact me in the future via private messaging.

Date: 2003/01/13 23:11:18, Link
Author: ExYECer
Now that the moderator at ISCID has removed my posting privileges for no apparent reason, other than my asking him not to attack me in public, I will be posting my response to Nelson on this forum. I hope that this forum will be less hostile to scientific inquiry.

Date: 2003/01/13 23:14:34, Link
Author: ExYECer
Dear Nelson,

May I suggest that when you quote from Answers in Genesis you at least provide the correct reference? And it would be helpful if you would indicate that you are quoting from a secondary source.


The eye of a lobster (and some other 10-legged crustaceans1 including shrimps and prawns) shows a remarkable geometry not found elsewhere in nature—it has tiny facets that are perfectly square, so it ‘looks like perfect graph paper.’2

2. Hartline, B.K., Lobster-eye x-ray telescope envisioned, Science 207(4426):47, 4 January 1980.


And while at lower magnification these facets may indeed seem perfect, at high magnification they are hardly so.

Another example of these 'perfect squares' in prawn shrimps

I would like to point out that my main objection is to Nelson's claims that no evolutionary pathways or selective advantages have been shown for the evolution of eyes, particularly in lobsters and Pecten. By relying on second-hand resources which have been shown to be of doubtful accuracy in many other areas, Nelson has been furthering an argument for which he does not seem to have any first-hand information. And yet he is willing to make claims that would go beyond what would be supportable without any knowledge of first-hand sources. This has led Nelson to make such assertions as 'perfect squares' when in fact the photos show that they are hardly such. Other mishaps have been documented elsewhere and in the rest of this posting.

The moral of the story is that if one wants to argue that it is the failure of natural selection and RM alone to account for these eyes that opens the door to front-loading, one should be familiar with the actual evidence and not some second-hand resource.

Nelson still seems unable to grasp the simple fact that the observation that Pecten needs to see both in and out of the water may explain the selective advantage of an eye that can see in both environments. The difference between Pecten and its precursors need not be that large once one realizes the likely pathways.


As noted in Chapter 1, the case of Pecten is fascinating. It has evolved an eye with two separate retinas placed next to each other but separated from the tapetum and other elements at the posterior of the optical orb. The tapetum has become a reflective mirror in a catadioptric optical system consisting of the objective group and the tapetum. When the eye is immersed in sea water, the cornea is ineffective but the crystalline lens and the tapetum combine to form a catadioptric optical system bringing light to focus on one of the two retinas. When the eye is not immersed in sea water, the cornea of the objective group is effective and the cornea and crystalline lens operate as a dioptric optical system with the other retina. This provides an animal living in an estuary with focused vision under both aquatic and terrestrial conditions.


First of all, the reflector. It should be noted that


The tapetum sheet can evolve to form a variety of functions depending on the animal. It is generally a passive layer. Normally, it can aid in the absorption of stray light that has passed through the retina. In some cases, it consists of small groups of cells that act as a retro-reflector to direct light back through the retina. As seen in the case of the mollusc, Pecten, the cells can also be used to form an optically coherent sheet of cells that form a reflecting optical element in a catadioptric lens system.


Now the retina


The individual photoreceptors are similar in structure to those of Arthropoda, i.e., the chromophoric material is found in rods exuded from the sides of the photoreceptor cells. .... Whereas the rhabdom of Arthropoda exhibits a circular symmetry with respect to the centerline of the assembly, this is much less evident or nonexistent in Mollusca. The limited data available indicates an orthogonal grouping of photoreceptor cells to achieve a higher sensitivity to the polarization of the incident light.



It is the failure of natural selection and RM alone to account for these eyes that opens the door to front-loading.

In fact you have failed to show that RM&NS fail to account for these eyes, so there goes your justification for front loading. And you still seem unable to describe to us what front loading really is, other than 'not RM&NS'.
Describe your front-loading scenario in some detail, please, and explain what mechanisms you have in mind that played out for times t>t_0 (the time of front loading). I would encourage you to check out some of the works on Intelligent Design that would allow you to familiarize yourself with the concepts and their strengths and weaknesses.


3. You say: "Spondylus: Until Nelson can show us from the original research papers what the eye of the Spondylus looks like as compared to Pecten we have no real way to discuss this."

You did this yourself:

In the Spondylus and Pectinidae, the eyes are quite well developed consisting of a cornea, lens and retina.

That suggests to me that Nelson is not familiar with the primary sources that describe the eyes of Pecten and Spondylus. In fact, as far as I have been able to tell, Spondylus eyes are not like Pecten eyes at all.

For instance from Ibid:

use of two separate optical forms within the available physical envelope in Pecten, including introduction of an entirely new optical form to animal physiology--a catadioptric lens system.

Pecten seems to be unique in this respect. Perhaps Nelson can present Spondylus in some more detail? Which species of Spondylus, btw, is Nelson referring to?


Anyway, my discussion of front-loading and eye development centers around pax-6. Pax-6 has remained more or less the same in most branches of life. In every animal examined that has eyes, this gene proves to be involved in eye development. Moreover, it is necessary for function. In both mice and flies, pax6 mutations severely affect development of eyes. Drosophila has two closely related pax6 genes, eyeless (ey) and twin of eyeless (toy). Expression of either of these genes in antennal, leg and wing imaginal discs in Drosophila causes well formed ectopic eyes. In Xenopus embryos, ectopic injection of pax6 into blastomeres causes ectopic eye-like structures. So not only is Pax6 required for eye formation, it is also the author of eye development. Pax-6 genes control both upstream and terminal functions in the gene network for eye development. Since pax-6 genes also exist in primitive organisms like sponges, a prediction of FL (Mike or Warren can correct me if I'm wrong) would be that it plays a non-essential role in these organisms.

So far so good: we have evidence that evolution shaped the expression of the Pax-6 gene as well as many other hox genes. But how does this show evidence of front loading? That essential genes have changed little over time yet have been quite able to lead to different evolutionary shapes shows how a simple variation in the timing of embryological development may have a significant impact on the resulting form. But as with Mike Gene, Nelson is now painting a bull's-eye around the arrow by arguing that evidence of common descent is suddenly evidence of 'front loading'. We all agree that at t=t_0 the pax-6 gene existed as an initial condition and that RM&NS played a role in shaping life thereafter. If Nelson wants to argue front loading then he surely has to accept the role of natural forces shaping evolution for t>t_0, or he is arguing for intervention rather than front loading. But while evolution can explain the evidence in a non ad hoc manner, Nelson seems to want to argue, without much evidence, that this front loading or initial condition required intelligent design. Yet in the actual discussion Nelson seems to be wavering between front loading and intervention, and so far he has been unable to provide us with the necessary details.

Nelson claims that the quote came from Denton, but according to "Design in Nature" by Harun Yahya the reference in Denton is:


The eye of a lobster shows a remarkable geometry not found elsewhere in nature - it has tiny facets that are perfectly square, so it "looks like perfect graph paper."2

2. J.R.P. Angel, “Lobster Eyes as X-ray Telescopes”, Astrophysical Journal, 1979, 233:364-373, cited in Michael Denton, Nature’s Destiny, The Free Press, 1998, p. 354

The reason I thought it was AIG was because your reference

Land, M.F., Animal eyes with mirror optics, Scientific American 239(6):88–99 1978 matches their
Land, M.F., Animal eyes with mirror optics, Scientific American 239(6):88–99, December 1978 remarkably well.

In either case your reference seems to be erroneous as far as I can tell. This can be avoided by actually reading the references from which one quotes. In fact, relying on Denton is never a good policy, which is why I included some references to reviews that show serious shortcomings in his interpretation of scientific work. The author may have referred to them as looking like perfect squares, but I have shown they are hardly such, especially at the relevant optical wavelengths.

Date: 2003/01/13 23:18:17, Link
Author: ExYECer
Was it because I cross-referenced?



You seem to be confused by my pointing out that Mike's approach is nothing different from what science would do, namely methodological naturalism, to show for instance that cytosine deamination leads to preferentially hydrophobic codons. I am not 'defending' methodological naturalism, since it quite obviously needs no defending.

If Mike wants to argue that his argument of 'front loading' gives some utility, then I agree, but in the end both the presumption of a utility preloading and a natural preloading lead to the same finding about cytosine deamination.

But Mike went beyond the claim of utility to state that he set out to show why an engineer would use cytosine. His argument however seems to amount to painting the bullseye around the arrow.

I am more than happy to accept that there was a different starting point between Mike and scientific researchers into cytosine deamination but I am merely pointing out that the starting point does not seem to make much difference to the final conclusion.

His framework is indeed completely naturalistic for t>t_0, with one minor variation: at a certain instant t=t_0 there is an initial condition which Mike considers to be designed and science considers to be due to regularity/chance. But this assumption has no impact on the scientific method used to determine the consequences of the initial conditions, which remains purely methodological naturalism. In fact Mike himself has stated that for t>t_0 all the processes are natural.

I am sorry to hear that you believe these are pre-canned arguments, since I have not really dealt with front loading in this detail before, other than pointing out Murray's compelling arguments. One may not like the direction the conclusions of this thread seem to have taken, but is that not the intention of this forum?

Btw, I find your statements to be hovering on argument ad hominem, moderator or not. And since you raised them in public rather than in private, I feel compelled to defend myself. Unless you want to open up a discussion that seems contrary to the spirit of this forum, I suggest that you contact me in the future via private messaging.

[Moderator Note: Archiving what one says is fine, but a line needs to be drawn between links that are useful and links that serve mainly to promote another site. The most important rule one learns in any Moderating 101 class is that you need to cut site-promoting at the knees.]

Emphasis mine.

It seems so: all my references to this site as an archive have been removed.

If ISCID cannot handle an honest scientific discussion, then fine with me. Let's archive and document the behavior of this 'peer reviewed' resource, in which 'reviewing' seems to forget that actual comments made by peers should be taken into account before publishing one's 'papers'. Could this be related to the recent funding drives by ISCID? I wonder if references to free discussion websites are now to be frowned upon? Especially if such websites are of a higher standard in many ways?

Or is it just a false pretext to block my access to ISCID, since the moderator has not deemed it necessary to remove the similar links in postings made by others? Oh, the smell of double standards...

Date: 2003/01/14 22:53:30, Link
Author: ExYECer
I got an email from the moderator who claimed the following:


Frances wrongly smeared Nelson Alonso's face in the mud of Creationism by accusing him of referencing a creationist site known as Answers in Genesis.  Nelson was merely referencing Michael Denton.

It may be helpful to check out what I really said, and in what context, though...

Actually, Nelson was referencing an incorrect link for his claim, and a search on the web showed that this was indeed the case. If Nelson uses a secondary source then he should quote "Land 1978 as quoted in Denton 1998" and not pretend that he has read the primary source.

Additionally, it is interesting that I 'smeared Nelson Alonso's face in the mud of Creationism'; surely that seems to be an interesting umbrella to hide under...

The moderator then continues to claim that


The "mistake" that Frances made was to start promoting a particular website by providing an "Archived" link to a discussion board at which he was keeping an additional record of his posts.

Note the double quotes around "mistake". I find it fascinating how the moderator seems to be unable to do his job in an objective fashion.

It's fascinating to see how the moderators on ISCID seem to be focusing so much on dissenting posters.

In another message the moderator wrote


We won't deny it, we pay special attention to your postings (as well as about 5 or 6 others).  And for the most part, we let you get by with most of the things you say and even appreciate much of what you have to contribute.

Perhaps the moderator(s) may learn something from the balanced postings :-)

Date: 2003/01/15 11:34:07, Link
Author: ExYECer
Hi Nelson

I notice you seem to have abandoned our front loading discussions?

Oh well, let's see what else we can talk about.

What "fact" of evolution are you referring to? Common descent is not a "fact" by any means.

Please explain, because common descent is quite strongly supported by the available data. The root of the tree may be quite difficult to determine, but there are hundreds of millions of years of evidence of common descent out there. Perhaps you can explain to us, perhaps even in your own words, why you object to common descent as a fact?

Nelson, in response to: "Many prominent IDists, like Philip Johnson for example, argue that we detect ID because evolution could not have occurred."

Reference? Please quote Philip Johnson saying that we detect ID because evolution could not have occurred.

ID is simply detected through elimination of chance and regularity. Hence ID has no positive approach to detection but merely relies on elimination.

Mike's hypothesis is certainly not lacking in detail; in fact it is quite detailed. For example, that multicellular life evolved through unicellular life. It even offers a testable prediction: that genes essential in higher organisms would be found in primitive organisms functioning in a non-essential role. Or that C-T transitions were incorporated by the designer to facilitate protein evolution. Many IDers also hold to the hypothesis that intelligent design occurred at the origin of life: that the first cell was intelligently designed, followed by evolution thereafter. Many IDers put forth hypotheses about a lot of things.

That's also known as painting the bull's-eye around the arrow. That many IDers put forth hypotheses has nothing to do with the lack of predictive power of ID. We have been discussing front loading with you, although you seem to have become a bit shy on that topic.
Mike's 'predictions' are meaningless if he cannot explain the motives of the designer, the limitations of the designer, or the moment of the design, and when natural forces can explain the observations much better and in a far less ad hoc fashion than ID. Front loading, as I have shown, has nothing to add to science by presuming without evidence that ID was involved.

That ID has an empirical methodology that can detect design.

Interesting, and can it distinguish intelligent design from non-intelligent design? Of course not. ID's methodology is inherently flawed, as has been shown by a large number of scientists. ID's methodology is even flawed from the perspective of philosophy, since it cannot separate front loading from intervention and thus cannot claim to be a replacement for methodological naturalism.

Sure it is, it is an empirical design detector. You see? Every time you say ID is nothing, it is easily refuted by this simple fact. Check out "No Free Lunch" by William Dembski, you'll see what I mean.

Enlighten us, since I have checked it out, and read it btw, and I fail to see that ID is something. Any particular items you would like to discuss in detail rather than gloss over?

My question is quite simple, and if you cannot answer it then you are unqualified to say whether there is evidence of ID, or if ID is a valid scientific hypothesis. In fact, you cannot even speak of evolution if you cannot distinguish design from apparent design. So again, what would cause you to infer design behind some biological system?

Are you qualified to speak about the eyes of a lobster, since you seem to be unfamiliar with the primary research, Nelson? But let's address your question.

First of all, whether or not ID can ever become able to distinguish itself from purely natural methods does not mean that we cannot show that life on earth evolved. We are now talking about the observations and trying to explain the mechanisms. But ID has no mechanisms, while science does provide mechanisms to explain the observations in a non ad hoc manner.

What would cause you to infer design behind some biological system?

I addressed a similar question by Mike Gene


Mike wants to know what data would cause me to suspect that evolution was front loaded. This would require the following:

1. Mike needs to define what the purpose or goal(s) of the intelligent agent are.
2. Mike has to show that, given the chaotic and unpredictable nature of the world around us, this goal can be reliably reached.
3. It can be shown that natural processes without intelligent design could not have achieved the state at t=t_0.
4. It can be shown that there was indeed an intelligent agent present at t=t_0.

What would lead Nelson to infer design?

Date: 2003/01/15 11:55:54, Link
Author: ExYECer
“And ev and others have shown that in principle mutation and natural selection are sufficient to increase information in the genome.”—Frances

But this is not in dispute, is it?

But it is; if you check out the works of the more vocal ID proponents, the claim that evolution cannot increase information is quite prevalent. Phillip Johnson reviving the 'specter of Spetner' comes to mind as a good example.

I do not understand why you have such a hard time seeing the information increase in Schneider's experiments or Adami's experiments. It is a relatively straightforward calculation that shows that Shannon information increases during the experiment. You claim that the experiment is 'to match a target sequence'. That statement by itself suggests that you have not carefully read the work and the programs involved, since there is no 'target' particular sequence to match. It saddens me to see, time after time, Schneider's work 'attacked' or misrepresented in this manner.

So once again: Show where in the program the object is to match a target sequence or the value of its information content.


This information, the target Rfreq, a population, recognizer, etc., etc., altogether must constitute some ponderable amt of information. What happened to the “zero information” we were going to begin with? Is any of this information, other than Rfreq, measured precisely by the experimenter?

Please explain the relevance of the target R_freq as it applies to the program: what lines of code do you consider enforce this target? What relevance do population, recognizer, etc., have other than to distract from the simple point that you still have not raised a relevant objection? The information content of the genome is zero, that is, random, at t=t_0. What other objections do you have that are relevant to the experiment? Schneider is modeling a biological system in an abstract way.
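For readers unfamiliar with the calculation at issue: Schneider's R_sequence is simply the Shannon information of an alignment of binding sites, computed per position as 2 bits minus the positional entropy. A toy sketch of that calculation (my own illustration, not Schneider's ev program; his small-sample correction is omitted):

```python
import math

def r_sequence(sites):
    """R_seq in bits for a list of aligned, equal-length DNA sites:
    sum over positions of (2 - H_pos), where H_pos is the Shannon
    entropy of the base frequencies at that position."""
    n, length = len(sites), len(sites[0])
    total = 0.0
    for pos in range(length):
        h = 0.0
        for base in "ACGT":
            f = sum(1 for s in sites if s[pos] == base) / n
            if f > 0:
                h -= f * math.log2(f)
        total += 2.0 - h  # 2 bits = log2(4), the DNA maximum per position
    return total

# A 'random' alignment (each base equally frequent at every position)
# carries no information; a fully conserved one carries 2 bits/position.
random_like = ["AAAA", "CCCC", "GGGG", "TTTT"]
conserved   = ["ACGT", "ACGT", "ACGT", "ACGT"]
print(r_sequence(random_like))  # -> 0.0
print(r_sequence(conserved))    # -> 8.0
```

Nothing here encodes a target sequence: R_seq is measured from whatever sites the population happens to have, which is exactly why it can start near zero for a random genome and rise under selection.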

The final comment, that there is teleology involved because the experiment set out to see whether R_seq can match R_freq, shows a clear example of equivocation and irrelevancy.

Let's focus on this 'clearly visible' smuggling of information, per Janitor's suggestion: where was the information inserted? Support your claims with references to the programming code and show that they are relevant.

To give a few hints to show that Janitor is wrong:

There is no preset target to which R_seq has to evolve; in fact the experiments show that it varies significantly. When removing selection, R_seq does not change much, so if it were the population that played a role in inserting information, removing selection should not have made a difference. As for the 'etc. etc.', without detailed objections one may just wonder about their relevance to the discussion.

So far people seem to object to Schneider's experiment but they seem to be a bit unfamiliar with the details etc etc :-)

As far as the dispute of complex specified information, Dembski was the one to raise the issue that Schneider had claimed that he had shown that CSI could increase through natural means.


As an example of smuggling in complex specified information that is purported to be generated for free, consider the work of Thomas Schneider. Schneider heads a laboratory of experimental and computational biology at the National Cancer Institute.


Dembski seems to agree that CSI was generated; he just considers the CSI to have been smuggled in.

Date: 2003/01/15 12:30:32, Link
Author: ExYECer
Archived from ISCID:


First of all, I would like to apologize to you if my reference to the AIG website caused you any distress. My comments led to me being banned from posting [on ISCID] for one day because, among other things, according to the email from one of the [ISCID] moderators, they may have been interpreted as an attempt to 'smear [your] face in the mud of creationism'. If that is the case, then I would like to point out that I had no intention to smear your face in such mud, and I will openly apologize if such an impression was raised by my posting. Rather, I wanted to point out that your quote of primary research was erroneous. It was my mistake to conclude that you had quoted from AIG rather than from Denton.

I will from now on strengthen my attempt to be more patient in my replies in order to encourage an open discussion of ideas on this forum. I thoroughly enjoy our discussions here and I hope that others feel similarly.

Per [ISCID] moderator's suggestion I will also not provide any more links to the archive of my postings.

I hope that these steps can be helpful in generating an atmosphere in which we can discuss ID ideas in a constructive manner.

Date: 2003/01/15 22:39:15, Link
Author: ExYECer
Nelson on ARN claims that


I don't see how this is irrelevant. A reverse transcriptase is a special form of polymerase enzyme that uses an RNA template to make a DNA strand, whereas telomerase is an enzyme which functions to recognize the tip of a G-rich strand of an existing telomere DNA repeat sequence and elongates it in the 5'-to-3' direction, and is unique in that it carries it's own RNA template at all times.


This sounded a bit too non-Nelson to me, so I checked, and guess what? See this link.

" reverse transcriptase is a special form of polymerase enzyme that uses an RNA template to make a DNA strand" Chapter 5, page 264 Figure 5-42

"A reverse transcriptase is a special form of
polymerase enzyme that uses an RNA template to make a DNA strand;"


"Telomerase recognizes the tip of a G-rich strand of an existing telomere DNA repeat sequence and elongates it in the 5¢-to-3¢ direction. "


"The telomerase is protein–RNA complex that carries
RNA template for synthesizing a repeating, G-rich telomere DNA sequence. Only the part of the telomerase
protein homologous to reverse"

Maybe someone would like to calculate the likelihood of such a coincidence.
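For anyone who wants to take up that challenge, here is a crude sketch of one way to quantify the overlap (my own toy measure, not a rigorous likelihood calculation): count the word 8-grams the two passages share. Independent authors essentially never produce such long identical runs of wording.

```python
def shared_ngrams(a, b, n=8):
    """Word n-grams common to two passages. Long runs of identical
    wording are very unlikely to arise independently."""
    def grams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return grams(a) & grams(b)

nelson = ("A reverse transcriptase is a special form of polymerase enzyme "
          "that uses an RNA template to make a DNA strand")
textbook = ("A reverse transcriptase is a special form of polymerase enzyme "
            "that uses an RNA template to make a DNA strand;")
print(len(shared_ngrams(nelson, textbook)))  # a dozen shared 8-grams
```

With 20-word passages differing only in a trailing semicolon, nearly every 8-gram is shared; two independently written descriptions would share none.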


Date: 2003/01/21 15:25:05, Link
Author: ExYECer
Nancy Pearcey

Touchstone Magazine (July/August 1999)

Design & the Discriminating Public
Gaining a Hearing from Ordinary People

Date: 2003/01/28 00:51:36, Link
Author: ExYECer

Good posting on the effects of the moon phase on the recapture experiments by Kettlewell.

To get a better idea of whether there is any significance to the phases of the moon for peppered moth experimentation, I provide the following analysis of average daily capture rates of unmarked moths from the three experiments, correlated to moon phase:

Code Sample

Phase            B_53   B_55   D_55
New_Moon         NA     0.34   1.42
Waxing_Crescent  NA     NA     0.62
First_Quarter    NA     NA     0.58
Waxing           NA     NA     0.42
Full_Moon        0.96   NA     0.10
Waning           0.78   1.59   NA
Last_Quarter     1.10   1.30   NA
Crescent_Waning  NA     0.77   1.79
Average_Daily    56.45  35.31  17.67

(Figures are the average daily capture rate for the phase divided by the average daily capture rate for the entire experiment. I have expressed the data this way so that the generally low capture rates in Dorset do not conceal correlations. The data for New Moon, First Quarter, Full Moon and Last Quarter are the figures for the day of the phase plus the day on either side. Other figures are for all days between the flanking phases. The one exception is the Full Moon phase for Birmingham, 1953, which includes the two days prior to the Full Moon. Raw figures and moon phase data are available in Appendix A.)

The Dorset 55 data does seem to show a reasonable correlation between recapture rates and moonlight, with more moonlight leading to lower recapture rates as the moon attracts the moths. Were the Birmingham experiments perhaps run with pheromone traps? Does anyone know?
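As a quick sanity check on that Dorset 1955 correlation, one can compute a Pearson correlation between moonlight and relative capture rate. This is a sketch only: the illumination fraction assigned to each phase is my own rough assumption, and with six data points the result is suggestive rather than statistically significant.

```python
import math

# Dorset 1955 relative capture rates from the table above, paired with an
# assumed illumination fraction per phase (my own rough values), in order:
# New Moon, Waxing Crescent, First Quarter, Waxing, Full Moon, Crescent Waning.
illumination = [0.00, 0.25, 0.50, 0.75, 1.00, 0.25]
capture_rate = [1.42, 0.62, 0.58, 0.42, 0.10, 1.79]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Clearly negative: more moonlight, fewer moths captured.
print(round(pearson(illumination, capture_rate), 2))
```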

I am archiving the appendix data here


Kettlewell's captures of unmarked Biston betularia in the 1953 and 1955 experiments, correlated for moon phase. Some 1955 Birmingham data appears twice (once in each column) to assist comparisons. Data reads as

1955 | 1955/1953
Date| Catches | Date| Catches | Phase
12/6 | | 12/7| 100(89/6)| Last Quarter
13/6 | | 13/7| 29(25/4) |
14/6 | 17(0/16) | 14/7| 56(53/2) |
15/6 | 20(0/18) | 15/7| 24(20/2) |
16/6 | 37(1/34) | 16/7| 15(13/2) |
17/6 | 54(0/51) | 17/7| 22(20/2)|
18/6 | 30(0/29) | 18/7| 19(15/2)|
19/6 | 21(0/20) | 19/7| 6(5/1) |
20/6 | 43(0/41) | 20/7| 11(10/1)| New Moon
21/6 | 11(0/9) | | |
22/6 | 10(0/9) | | |
23/6 | ..... | | |
24/6 | ..... | | |
25/6 | 12(1/11) | | |
26/6 | 8(1/6) | | |
27/6 | 8(0/8) | 19/6 | | First Quarter
28/6 | 15(0/15) | | |
29/6 | 9(0/9) | | |
30/6 | 7(1/5) | | |
1/7 | 7(0/7) | | |
2/7 | 7(0/7) | | |
3/7 | ..... | 25/6| 9(8/0) |
4/7 | ..... | 26/6|144(124/14)|
5/7 | 2 (0/2) | 27/6| 38(33/5) | Full Moon
6/7 | | 28/6| 25(21/3) |
7/7 | | 29/6| 57(50/6) |
8/7 | 74(62/7) | 30/6| 40(36/3) |
9/7 | 41(40/0) | 1/7| 64(57/7) |
10/7 | 53(48/3) | 2/7| 58(47/7) |
11/7 | 9(4/5) | 3/7| 69(52/8) |
12/7 |100(89/6)| 4/7 | 52(43/7) | Last Quarter
13/7 | 29(25/4)| 5/7 | 65(57/3) |
14/7 | 56(53/2)| | |
15/7 | 24(20/2)| | |
16/7 | 15(13/2)| | |
17/7 | 22(20/2)| | |
18/7 | 19(15/2)| | |
19/7 | 6(5/1) | | | New Moon
20/7 | 11(10/1)| | |

Kettlewell's total catches for 1953 and 1955 experiments.

1955 | 1955/1953
Date| Catches | Date| Catches | Phase
12/6 | | 12/7| 101(89/7)| Last Quarter
13/6 | | 13/7| 29(25/4) |
14/6 | 23(4/17) | 14/7| 56(53/2) |
15/6 | 34(2/27) | 15/7| 24(20/2) |
16/6 | 37(1/34) | 16/7| 15(13/2) |
17/6 | 68(7/58) | 17/7| 22(20/2)|
18/6 | 31(0/30) | 18/7| 19(15/2)|
19/6 | 29(2/26) | 19/7| 6(5/1) |
20/6 | 47(1/44) | 20/7| 11(10/1)| New Moon
21/6 | 16(1/13) | | |
22/6 | 19(5/13) | | |
23/6 | ..... | | |
24/6 | ..... | | |
25/6 | 12(1/11) | | |
26/6 | 12(3/8) | | |
27/6 | 8(0/8) | 19/6 | | First Quarter
28/6 | 21(1/20) | | |
29/6 | 14(0/14) | | |
30/6 | 16(4/11) | | |
1/7 | 11(2/9) | | |
2/7 | 7(0/7) | | |
3/7 | ..... | 25/6| 9(8/0) |
4/7 | ..... | 26/6|149(127/15)|
5/7 | 9 (0/9) | 27/6| 40(34/5) | Full Moon
6/7 | | 28/6| 29(23/3) |
7/7 | | 29/6| 66(55/10)|
8/7 | 64(62/7) | 30/6| 42(37/3) |
9/7 | 85(73/11)| 1/7| 87(76/9) |
10/7 | 58(51/5) | 2/7| 92(75/13)|
11/7 | 59(50/7) | 3/7| 98(77/11)|
12/7 |101(89/7)| 4/7 | 77(66/9) | Last Quarter
13/7 | 29(25/4)| 5/7 | 81(73/3) |
14/7 | 56(53/2)| | |
15/7 | 24(20/2)| | |
16/7 | 15(13/2)| | |
17/7 | 22(20/2)| | |
18/7 | 19(15/2)| | |
19/7 | 6(5/1) | | | New Moon
20/7 | 11(10/1)| | |

Phases of the moon for 1953/1955

Lun# New Moon First Quarter Full Moon Last Quarter
---- ---------------- ---------------- ---------------- ------------
+0376 1953/05/13 05:06 1953/05/20 18:20 1953/05/28 17:03 1953/06/04
+0377 1953/06/11 14:55 1953/06/19 12:01 1953/06/27 03:29 1953/07/03
+0378 1953/07/11 02:28 1953/07/19 04:47 1953/07/26 12:21 1953/08/02
+0379 1953/08/09 16:10 1953/08/17 20:08 1953/08/24 20:21 1953/08/31

+0400 1955/04/22 13:06 1955/04/29 04:23 1955/05/06 22:14 1955/05/15
+0401 1955/05/21 20:59 1955/05/28 14:01 1955/06/05 14:08 1955/06/13
+0402 1955/06/20 04:12 1955/06/27 01:44 1955/07/05 05:28 1955/07/12
+0403 1955/07/19 11:34 1955/07/26 16:00 1955/08/03 19:30 1955/08/11
+0404 1955/08/17 19:58 1955/08/25 08:52 1955/09/02 07:59 1955/09/09


Date: 2003/02/03 23:11:50, Link
Author: ExYECer
I applaud the attempt to provide a forum in which researchers can explore the possibility of a scientific theory of intelligent design. On the other hand, I also have to agree with some of the others who have pointed out that the submissions to the journal have been of disappointing quality so far, and that comparing the original submissions with their final versions suggests little evidence of peer review.

In fact, I wonder whether an anything-goes approach is helpful to the investigation of Intelligent Design.

Date: 2003/03/04 00:46:09, Link
Author: ExYECer
A recent posting by Frances:

Dear John,

I am glad that you have returned despite your undoubtedly busy schedule. You still seem not to agree with the plethora of evidence presented to you that genetic algorithms can, and in fact do, increase the hypervolume of possibilities. I apologize if my comments or arguments have been hard to understand; I will do my best to explain them in as straightforward terms as possible. Others have shown other problems in your arguments, so I will not focus on the issue of gene duplication and new function, something which seems quite prevalent in nature, nor will I follow the route of those who have challenged you to show that multicellularity etc. are truly examples of innovation. What I will be focusing on is the foundation of your argument, namely that GA's cannot increase their hypervolume. While you have not formally defined the term hypervolume, I believe that from your claims one can make certain observations which, if correct, would contradict them. It may very well be that your definition of hypervolume differs from how the term is otherwise used (although in limited form), or that your definition of hypervolume has evolved.

Basically the argument you are making is simple: a GA is limited by the number of parameters which are allowed to vary, and these parameters determine the hypervolume in which the GA can search for solutions. If this is correct, then it is trivial to show examples in which the number of parameters actually evolves with the GA. Such applications are relatively recent, which may explain why your description of GA's seems somewhat outdated; it is perfectly common and acceptable that new knowledge or previously unfamiliar facts can affect a hypothesis, requiring it to be adapted or, in the worst case, rejected. Since the basic foundation of your argument is that GA's cannot vary the hypervolume of their parameter space, and since variations in the hypervolume are argued by you to be essential for creative invention, one cannot reject GA's in one big swoop. So while your argument is correct for a limited class of GA's, and your conclusions may very well be valid for such classes (I am not addressing whether or not the rest of your arguments are supportable), it is also clear that for a significant class of GA's the restrictions as formulated by you do not hold. This is not dissimilar from the finding that while the NFL theorems hold for a subclass of cases, they do not seem to hold for the classes which are most relevant.

Now I have laid out my conclusions, based on what I believe to be your argument from your own writings as well as on the available research on GA's that addresses parameter-space and hypervolume variations: not only do I assert that GA's can increase their hypervolume, but I have provided several examples in which such GA's are not only shown in action but arguably shown to generate inventive and creative solutions.

So here we go:

I base most of my argument on your paper, referenced in my original posting, which makes among others the following claims:


For each program, there is an n-dimensional hypervolume of possibilities in which that program operates, with n equal to the number of variable parameters. In the language of William Dembski's design inferential machinery, this n-dimensional hypervolume is equivalent to the reference class of possible outcomes, Omega 10.

This observation suggests that we may consider any genetic algorithm to be operating within a certain n-dimensional hypervolume, and certain fixed parameters completely determine that hypervolume ahead of time. Furthermore, any particular n-dimensional hypervolume is completely isolated and separate from any other m-dimensional hypervolume (m .ne. n).
and the most relevant one


The essential insight is that trial and error may only operate within a given hypervolume—but it may never jump to a new, higher-order hypervolume. The unbridgeable gaps between hypervolumes correspond to the technical contradictions in TRIZ theory.

This hypervolume is fixed by certain non-varying parameters (In Dawkin’s Biomorph example, the number of genes and the rules regarding how the integer values of each gene are interpreted) that an intelligent agent must set and which are not allowed to vary.
You then give some examples including the traveling salesman problem.


take-home lesson is that selection and mutation processes can operate within pre-set hypervolumes to find solutions that we know exist but which may be intractable given our current knowledge. However, they cannot find the hypervolume or the fitness function apart from intelligence—we still have to do the design work (getting the program into the right hypervolume where a solution may be found, and then finding the right fitness function over that hypervolume) before the algorithm can take over and sift through the vast possibilities to find a workable solution.
The examples shown by you are indeed good examples of a hypervolume or parameter space which is fixed, but such GA's are a subclass of a much larger class of GA's which vary not only the values of their parameters but also the parameter space itself.
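To illustrate the point with a toy sketch of my own (not drawn from any of the cited papers; the fitness function, gene range, and rates are arbitrary choices for the demonstration): a GA whose genome length can itself mutate is searching a space whose dimensionality n is not fixed in advance, so it is not confined to one n-dimensional hypervolume.

```python
import random

random.seed(0)

# Toy problem: evolve a list of genes in [0, 10] whose sum is close to
# TARGET, with a small penalty per gene. A single starting gene cannot
# reach the target, so selection must grow the genome's dimensionality.
TARGET = 100

def fitness(genome):
    return -abs(sum(genome) - TARGET) - 0.1 * len(genome)

def mutate(genome):
    g = genome[:]
    r = random.random()
    if r < 0.2:                       # structural mutation: add a dimension
        g.append(random.randint(0, 10))
    elif r < 0.4 and len(g) > 1:      # structural mutation: drop a dimension
        g.pop(random.randrange(len(g)))
    else:                             # ordinary mutation: perturb one gene
        i = random.randrange(len(g))
        g[i] = max(0, min(10, g[i] + random.randint(-3, 3)))
    return g

population = [[random.randint(0, 10)] for _ in range(50)]  # all start 1-D
for _ in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print(len(best), sum(best))  # genome length grew well beyond the initial 1 gene
```

No intelligent agent fixed n ahead of time here; the structural mutations let the search jump between hypervolumes of different dimension, which is exactly what the quoted argument says cannot happen.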

Now let's look at very similar words by Gero:


in computational terms, can be defined as the designing activity that occurs when one or more new variables is introduced into the design. Processes that carry out this introduction are called “creative designing processes”. Such processes do not guarantee that the artifact is judged to be creative, rather these processes have the potential to aid in the design of creative artifacts. Thus, creative designing, by introducing new variables, has the capacity to produce novel designs and as a result extends or moves the state space of potential designs.
I provided a reference to a paper in which GA's are shown to generate better solutions to control problems than experts in these fields. They achieve this by searching not only parameter space but also higher-dimensional parameter spaces.

And once again the authors conclude that


This paper has demonstrated that genetic programming can be used to automatically create both the parameter values tuning and the topology for controllers for illustrative problems involving a two-lag plant and a three-lag plant.
So not only did the parameter values evolve, but so did the topology (the hypervolume) itself. Other examples show how GA's can explore higher hypervolumes:


It is expected that the performance of a circuit will fall with rising temperature, but Figure 5 reveals that the evolved circuit's behaviour also degrades as the temperature is decreased from 340mK. This kind of behaviour had never been seen in such proposed `single electron' circuits before, and indicates that the circuit actually exploits or relies upon the thermal noise of the electrons at 340mK. This is not necessarily desirable, and perhaps by evaluating across a range of temperatures during evolution a thermally robust solution could be found [7], but we see immediately that evolution is exploring a previously inaccessible part of design space.
Note that I am not arguing any specific examples in biology, others have done this and shown how the genetic toolbox seems to include variations in the parameter space.

Let's give another example, out of many, showing how GA's can manipulate their parameter space:


In designing a state space of possible designs is implied by the representation used and the computational processes that operate on that representation. GAs are a means of effectively searching that state space which is defined by the length of the genotype's bit string. Of particular interest in design computing are processes that enlarge that state space to change the set of possible designs. This paper presents one such process based on the generalization of the genetic crossover operation.
Adaptive Enlargement of State Spaces in Evolutionary Designing by JOHN S. GERO AND VLADIMIR KAZAKOV

A side comment about Davidson and genetic networks. I too have listened to a presentation by Davidson on the sea urchin and the starfish. You state that "For instance, a starfish wiring diagram has some fundamental, deep-rooted differences from a sea urchin (which it is supposed to have evolved from), in terms of how genes are plugged into the network.". First of all it should be emphasized that starfish and sea urchins share common ancestry. It's like saying that we evolved from apes rather than the more correct "apes and humans share common ancestry".

Some quotes from his talk on 2/12/03, where Davidson presented some of the latest findings.

When looking at a genetic network slide for the sea urchin Davidson comments

endomesoderm formation in the sea urchin, and I want to consider a piece of the network as it exists in another animal, the starfish, which diverged 500 million years ago.
Two things to discuss: the part of the regulatory network that is responsible for skeletogenesis (how did the change happen?), and endoderm gut formation; sea urchins and starfish have very similar processes here.

Three-gene positive regulatory loop; the system cannot revert. Multi-gene loops are found in many gene networks ('Drosophila', the 'Hox network') as common loops. New invention: the micromere. Making micromeres and a skeleton is new; not having them is old. So skeletons arose after the divergence of the sea urchins and the starfish. Fate maps of both embryos: one is missing a skeleton, but the spatial relationships are pretty much similar, with some minor differences.
Similar networks in gut formation for sea urchin and starfish; Foxa and Brachyury are examples of very similar genes. Next, perturbation analysis was applied to other genes. Tbrain is involved in skeletogenesis, one of the regulatory genes that run the downstream skeleton program. A chance to actually look at the process of evolutionary change. In the sea urchin it is used in endoderm; in the starfish it is used in the skeleton. Tbrain is under the same regulatory controls as the other genes. So what did they find? What remained the same, in enormous detail: the same Krox gene activates the same Otx gene, has the same feedback relation with GataE, and the same feedback to Otx. A forward-drive feature, which has not changed in 500 million years. What is different? All of the connections to Tbrain are entirely changed, confined to two cis-regulatory elements; Tbrain was totally rewired, got a new cis-regulatory control, and got destroyed in the endoderm. Comparative investigation of cis-regulatory genes can help us understand how this all happened.


Lots of sculpting can be done by moving repression around. Tbrain was used in the gut first and is now in charge of the skeleton. The gene battery for the skeleton came under Tbrain control. I can make a scenario with a few changes of how this could have happened. Davidson provided a scenario, mainly based on repression, which may explain these morphological changes. It's a testable hypothesis. It's a hard problem: how do the kinds of multiple, strongly interrelated cis-regulatory elements appear in evolution? The traditional argument has been that GC/AT basepair changes can make suppressors, but this is insufficient for more than single sites. Next argument: cis-regulatory elements migrate by transposition. That happens, but where do they come from originally? It's hard to make a convincing case. So what other mechanisms could be responsible for constructing cis-regulatory elements? A characteristic of these networks is their plasticity to rewiring.

Some relevant articles
"A regulatory gene network that directs micromere specification in the sea urchin embryo." Oliveri P, Carrick DM, Davidson EH.


To generate the echinoid system would then require only that the pmar1 gene (itself a member of a gene family; our unpublished data) be brought under the control of maternal factors localized at the pole of the egg, and that a single key regulatory link between it and the gene encoding the global repressor be installed. This evolutionary hypothesis suggests that despite its great elegance, the whole micromere specification system that we see in Fig. 7 is basically a jury-rigged add-on, which except for the role of pmar1, is all made of preexistent parts. Whatever its connection with evolutionary reality, the argument suggests that comparative network analysis will someday provide the means to test directly the pathways of regulatory evolution, so that we can understand not only how developmental systems work, but how they got that way.
Carl: when asked how ID would solve the problem, you seem not to provide any testable or quantifiable scenarios.

There may be other proposals to account for many simultaneous mutations, but I have not been able to find one. That would leave ID as an excellent candidate, as it fits all the evidence.
So far I have yet to see any ID scenario, so whether or not it fits 'all the evidence' would surely seem to be begging the question. Perhaps if you could spend some time elaborating on why you believe ID is an excellent candidate, rather than asserting it, then we could actually determine for ourselves whether there is some foundation to your claims.

When asked about the reason why humans should be the goal, Carl suggests that this is because of his religious beliefs. But all life is created in the image of God, including us. Evidence of a designer would be helpful in supporting a belief, though not necessarily a Christian one; so far, however, such evidence seems to be lacking. Attempts have been made to infer the designer's existence through negative evidence, but they all seem to have failed so far. As a Christian I do believe that God created us and all life in His image, and that we just happen to be that part of His Creation which evolved to become able to appreciate His Creation in fuller detail. But we are getting into religious speculation here.

Yersinia, thanks for your extensive links, which approach the whole discussion from a very different side, namely by showing that evolution and gene duplication can be quite inventive and creative. Changes in timing seem to be one way in which evolutionary novelties develop.

ASCSCommanding, your approach is quite interesting in that you show that the difference between us and other organisms is contained in the dimensionality and ordering of the billions (?) of basepairs, something which is not by itself beyond at least the theoretical range of GA's. Indeed, without knowing the predecessors, it may be hard to identify what is truly inventive. A similar problem seems to apply to CSI, which seems to require historical knowledge. An interesting parallel.