AE BB DB Explorer



Date: 2002/11/28 23:02:57, Link
Author: RBH
Mr. Davies suggested that
Quote
It would even be fascinating to have them say that there could be more than one designer.

I tried that on ISCID.  It stimulated some responses, and I may yet do some more on it when I have time.

RBH

Date: 2002/12/02 21:29:28, Link
Author: RBH
Last month I gave a colloquium on ID for high school biology teachers, and as a follow-up, I've been sending them resources.  In aid of that, I put together a short essay, something like theyeti's.  I plan to distribute it to them in the not-too-distant future.  Comments/critiques welcome.

***************
In his post on ISCID John Wilkins wrote
Quote
Let us begin by asking how science tells us anything at all. In my view, science is the recognition of patterns in data, and the generation of models that are adequate to delivering those patterns as explanation. The information in science, the "signal" from the physical world, is the information of measurement - Fisher Information, AKA the Cramer-Rao Bound (which is, roughly, where the second derivative of the estimate of the accuracy of a measurement is zero). In my view, science is induction from data, and the models retain the information content of the measurements just to the extent they are accurate. (Note: induction may not be a justification of models, but it sure as #### is the way we gather our data together so we can make reliable inferences; still, let's not open that can of undergraduate Humean worms.) The information content of a scientific explanation is just the preserved accuracy of the data in the model.

Anything that we know through science we know from empirical data. So a design inference has to be not only consonant with data, but licensed by the patterns that exist in the data. To be achievable, we need to understand (that is, have a model of) design and designers.  (emphasis added)


And consider this from my OP in the Multiple Designers Theory thread on ISCID
Quote
D. There is a finite and limited number of multiple designers.  This premise is more difficult to support by empirical evidence than the others, but it is logically necessary to prevent the MDT enterprise from degenerating into a mere list of designed phenomena, a cosmic oddity shop of designs. Scientific theories condense (superficially) disparate phenomena into similarity classes and explain the behavior of instances of the classes by invoking general principles and laws that refer to those classes rather than to individual instances. If the number of designers is unlimited then in the limit each class would have just one member, and (since in that case no multi-member classes exist) no general laws are possible and therefore there is no science. It is logically possible that there is an infinite number of designers, but in that case no scientific study of design is possible. It is therefore a scientifically sterile speculation. (emphasis added)


As I read them, IDists argue that the Explanatory Filter (nowadays pretty much reduced to the assertion that IC structures or processes cannot be produced by evolution) detects a property (improbability) of some objects or processes in the world, and therefore (since that property is asserted to be shared by a set of objects and/or processes), there is a class of natural-world phenomena, a class defined solely by improbability, that evolutionary theory can't explain because it is not a similarity class in the terms of models based in evolutionary theory.  Their only common property, improbability, does not define a class that enters the theoretical laws and models of evolutionary biology.  Therefore the class is asserted to require some other kind of explanation, an intelligent design explanation.

There have been two general kinds of counter-arguments offered in the various critiques of current approaches to ID.  One focuses on the probability estimates.  An important component of this critique is the argument that "improbability" is not a property of an object or process but rather is a characterization of an object or process with reference to some probability density function (PDF), and the alleged improbability of some biological structures and processes is in large part due to an inappropriate choice of PDF for the estimates.  Thus the 'class' formed by highly improbable objects like bacterial flagella and blood clotting biochemical cascades is an artifact of the (idiosyncratic, unjustified) choice of PDF rather than being some intrinsic property of the phenomena.  "Probability" is not an inherent property of an instance; it is a description of a relation between an instance and a PDF.
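
To make that concrete, here is a toy Python sketch (an illustration only: the stand-in sequence and both PDFs are invented, and nothing here reproduces anyone's actual calculation).  The very same outcome is assigned enormously different "improbabilities" depending solely on which PDF is assumed:

Code:
from collections import Counter
from math import prod

# A 60-symbol string standing in for some biological sequence.
sequence = "AAAAAG" * 10

# PDF 1: each symbol drawn independently and uniformly from {A, C, G, T}.
p_uniform = prod(0.25 for _ in sequence)

# PDF 2: symbols drawn with frequencies matching the sequence's own composition.
freqs = {s: n / len(sequence) for s, n in Counter(sequence).items()}
p_biased = prod(freqs[s] for s in sequence)

print(f"P(sequence | uniform PDF) = {p_uniform:.3e}")  # ~7.5e-37
print(f"P(sequence | biased PDF)  = {p_biased:.3e}")   # ~1.8e-12
# Identical outcome, different assumed PDFs, and the "improbability" shifts
# by roughly 24 orders of magnitude: improbability is a relation between an
# instance and a PDF, not a property of the instance.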

The second general sort of counter-argument is to the effect that allegedly unevolvable structures and processes (whether the probabilities are correctly estimated or not) can in fact be accommodated in classes appropriate to causal models from evolutionary biology, and are therefore explained by those models.  Thus it is argued that while "direct" incremental evolution of some biological structure may not be possible, indirect routes (cooption, scaffolding, etc.) can account for the naturalistic evolution of the objects and processes.  In addition, "direct" incremental evolution may actually be possible given that different evolutionary operators induce fitness landscapes with very different topographies, so what appears to require saltational cliff-climbing on one fitness landscape might be simple one-step-at-a-time incremental evolution on a gentle slope of another landscape.  On those arguments the probability estimates do not define a class of phenomena that must be explained in some way that evolutionary theory doesn't provide.

In addition, ID has serious explanatory problems.  By explicitly disavowing conjectures about the number and nature of purported intelligent designing agents and by avoiding hypotheses about the means by which abstract designs are transmitted to or implemented in matter and energy, IDists deliberately eviscerate their ability to provide a scientific explanatory model.  With no hypotheses about designers or mechanisms of design implementation, nothing holds the class of improbable structures and processes together except the (inappropriate) probability estimates.  The examples that have been offered have no properties in common but the probability estimates.  They constitute a mere oddity shop of disparate phenomena bound together by nothing but purported improbability.

The class of allegedly improbable structures and processes offered in the ID literature floats alone in empty conceptual space, unconnected to any causal or correlational explanatory model.  The class of improbable phenomena is not part of a relational structure of classes to which natural (or non-natural) laws and generalities apply.  No laws or generalities have been offered by IDists that go beyond a mere claim of the existence of a class of improbable instances.  IDists offer no testable hypotheses about relationships between the class of purportedly improbable instances and anything else in or out of the world of physical matter and energy, and so there is no scientific explanatory power in ID.

****************
btw, I note with fascination that this board censored John's use of H***!

RBH



Date: 2003/01/14 23:09:45, Link
Author: RBH
Francis's email from the ISCID Mod:
Quote
The second "mistake" that Frances made was to start promoting a particular website by providing an "Archived" link to a discussion board at which he was keeping an additional record of his posts.
Uh oh!  I'm in trouble.  I just linked to a whole thread here in a post on ISCID.  :)

RBH

Date: 2003/01/22 12:15:11, Link
Author: RBH
Pearcey wrote
Quote
The second reason design is a winner is that it is a full-fledged scientific research program, not a narrowly conceived ideological position. As soon as one stakes a movement on some narrowly conceived position, there is a danger of splintering off into antagonistic groups and disagreeing over the details. For too long, opponents of naturalistic evolution have let themselves be divided and conquered over subsidiary issues like the age of the earth. The beauty of design is that it can unite everyone who opposes the broad, overarching claim of naturalism while providing a common framework for working on subsidiary issues as allies.

As several people have remarked, should the ID movement carry the issue some day, the very next day (metaphorical) blood will flow in the halls of the DI as the enemies of my enemy suddenly become my enemies.

RBH



Date: 2003/01/25 12:48:20, Link
Author: RBH
Ipetrich wrote
Quote
And self-recognition may be a byproduct of a more general mental-modeling capability, with the model being of oneself.

I have a vague memory of a view of consciousness/self in which the model is oneself.  That is, the "self" that is our awareness consists in the mental model built through the interactions of the perceptions of inputs and the perceptions of one's own behavior in the light of those inputs during some developmental period.  So self-recognition would be the model 'recognizing' its own existence!  Recursion, anyone?

RBH

Date: 2003/01/28 13:08:30, Link
Author: RBH
"Primacy," maybe, but not sole ownership - don't forget MDT! :)  

I am pursuing (as time allows) some of the designer-discrimination methodological ideas associated with MDT.  As I said in my post in response to Mike Gene's tantrum on ARN, that's not a frivolous exercise.

RBH

Date: 2003/02/03 12:50:26, Link
Author: RBH
charlie d wrote
Quote
It would be interesting to have a breakdown of articles submitted/accepted, and of the amounts of revisions required per article.  The few articles I paid any attention to seem to have undergone hardly any revision at all between appearance in the archive and actual publication, regardless of comments appearing on Brainstorms (and some were quite significant! ).

I noticed that, too, and wondered about it.  I spent some non-trivial time and effort on one paper to no visible effect.  There was no response from the author on Brainstorms, and the paper appears to have been published essentially as submitted.  If part of the PCID "peer review" process is associated with the submissions being posted on Brainstorms for comment, it is having no visible effect on what ends up being published.  Hardly seems worth doing, in fact.

RBH

Date: 2003/02/20 10:01:10, Link
Author: RBH
They picked one of the most regressive of the YECs to lecture.  From here:
Quote
Evolution is a theory: it is not science. In fact evolution contradicts science: it contradicts the laws of thermodynamics, the law of biogenesis, laws of probability, and the law of cause and effect. It is based on natural mutations (which stand as witnesses against it rather than proof for it). The only so-called material evidence for evolution, the fossil record is in fact evidence against evolution, because the complete absence of transitional forms concludes that the fossils clearly say NO to evolution. The catalogue of scientific deceptions that are used to present the missing links or transitional forms, go to show that evolution is a non-starter. These range from the famous Piltdown Man to the recent fake missing link between birds and dinosaurs supposedly smuggled from China and described in "scientific" details in the National Geographic Magazine only to be exposed 4 months later as a blatant forgery.
RBH



Date: 2003/02/22 00:02:39, Link
Author: RBH
I particularly like this one:
Quote
Christian Creationism is controlled by those who are doctrinally wedded to Zionist Dispensational goals. This marriage has blinded the Creationist leadership to the fact that both the Zionist and the Dispensational concepts come from that same 13th century anti-Christ Kabbalist source as did Relativism, Big Bangism, and the Expanding Universe concepts. Add it up!

RBH

Date: 2003/02/27 11:52:43, Link
Author: RBH
This thread is to archive material relevant to the ID argument that evolutionary algorithms in general, and genetic algorithms in particular, either cannot generate "new information" or somehow show that an intelligent designer is necessary in order for an EA to generate new information.  It will include relevant postings from other boards as well as summaries and references to the appropriate literature.

I'll start it with a posting by Francis on ISCID's Brainstorms (a posting I was in the process of sporadically cobbling together until Francis anticipated me :))  In this posting Francis is responding to John Bracht's contention that (according to the TRIZ model of innovation), there are two sorts of inventions, routine and innovative, and that evolutionary processes can generate the former but not the latter.

Francis wrote as follows:

After having established that genetic algorithms can indeed increase their hypervolume, and thus cannot in one grand swoop be excluded from being able to generate innovative/creative designs, it may be interesting to explore whether there are examples of such. Such a project, however, is complicated by the vague definitions of innovative/creative as used in TRIZ, and thus by the difficulty of distinguishing creative/innovative solutions from routine design. In order to at least provide some foundation for defining the various forms of design, let's discuss those forms.

Gero distinguished between routine and non-routine design. Routine design involves instances in which all necessary knowledge is available, or more formally
Quote
...that designing activity which occurs when all the knowledge about the variables, objectives expressed in terms of those variables, constraints expressed in terms of those variables and the processes needed to find values for those variables, are all known a priori.

Source: "Mass Customisation of Creative Designs", John S. Gero

Gero points out that, in addition, routine design limits the available ranges of the variables.

Gero identifies two forms of non-routine designing:

Innovative designing and creative designing.
Quote
Innovative designing, in computational terms, can be defined as that designing activity that occurs when the constraints on the available ranges of the values for the variables are relaxed so that unexpected values become possible,

Innovative designing produces designs that belong to the same class as their routine 'brothers' but are also 'new'.

Creative designing:
Quote
in computational terms, can be defined as the designing activity that occurs when one or more new variables is introduced into the design. Processes that carry out this introduction are called "creative designing processes". Such processes do not guarantee that the artifact is judged to be creative, rather these processes have the potential to aid in the design of creative artifacts. Thus, creative designing, by introducing new variables, has the capacity to produce novel designs and as a result extends or moves the state space of potential designs.

Let's look at the following paper:

"Automatic Creation of Human-Competitive Programs and Controllers by Means of Genetic Programming" by Koza et al.

Abstract:
Quote
Genetic programming is an automatic method for creating a computer program or other complex structure to solve a problem. This paper first reviews various instances where genetic programming has previously produced human-competitive results. It then presents new human-competitive results involving the automatic synthesis of the design of both the parameter values i.e., tuning and the topology of controllers for two illustrative problems. Both genetically evolved controllers are better than controllers designed and published by experts in the field of control using the criteria established by the experts. One of these two controllers infringes on a previously issued patent. Other evolved controllers duplicate the functionality of other previously patented controllers. The results in this paper, in conjunction with previous results, reinforce the prediction that genetic programming is on the threshold of routinely producing human-competitive results and that genetic programming can potentially be used as an "invention machine" to produce patentable new inventions.

Koza provides us with two examples in which GAs were used to file innovative design patents:
Quote
There are at least two instances where evolutionary computation yielded an invention that was granted a patent, namely a design for a wire antenna created by a genetic algorithm and a patent for the shape of an aircraft wing created by a genetic algorithm with variable-length strings.

Koza continues with a table of 24 examples of "results where genetic programming has produced results that are competitive with the products of human creativity and inventiveness."

15 of these 24 examples involve previously patented inventions, 6 infringe on patents and one improves on a patent. Nine duplicate the functionality of the patent in a novel manner.

The question remains, are these examples of routine or creative/non-routine design?

Koza specifies two ways of running GAs:

There are two ways of determining the architecture for a program that is to be evolved using genetic programming.

1. The human user may prespecify the architecture of the overall program as part of the preparatory steps required for launching the run of genetic programming.

2. Architecture-altering operations may be used during the run to automatically create the architecture of the program.
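
As a rough sketch of that difference (an editorial toy in Python, not Koza's code; the operators and rates are invented), the second approach lets the run itself add new dimensions to the space of designs being searched:

Code:
import random

def mutate_fixed(genome, rate=0.1):
    # Way 1: the architecture (here, simply the genome's length) is
    # prespecified by the user; mutation only tunes values within it.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def mutate_arch_altering(genome, rate=0.1, grow_rate=0.05):
    # Way 2: an architecture-altering operation may append a brand-new
    # variable mid-run, expanding the state space of possible designs.
    genome = mutate_fixed(genome, rate)
    if random.random() < grow_rate:
        genome = genome + [random.gauss(0, 1)]
    return genome

g = [0.0, 0.0, 0.0]
for _ in range(100):
    g = mutate_arch_altering(g)
print(f"final dimensionality: {len(g)} (started at 3)")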

Koza goes on to apply the GA to a controller problem in the following manner:
Quote
In this paper, program trees in the initial random generation consist only of result-producing branches. Automatically defined functions are introduced sparingly on subsequent generations of the run by means of the architecture-altering operations.

The two lag plant:
Quote
As will be seen below, the result produced by genetic programming differs from a conventional PID controller in that the genetically evolved controller employs a second derivative processing block. As will be seen, the genetically evolved controller is 2.42 times better than the Dorf and Bishop [28] controller as measured by the criterion used by Dorf and Bishop, namely the integral of the time-weighted absolute error. In addition, the genetically evolved controller has only 56% of the rise time in response to the reference input, has only 32% of the settling time, and is 8.97 times better in terms of suppressing the effects of a step disturbance at the plant input.

The three lag plant:
Quote
As will be seen below, the controller produced by genetic programming is better than 7.2 times as effective as the textbook controller as measured by the integral of the time-weighted absolute error, has only 50% of the rise time in response to the reference input, has only 35% of the settling time, and is 92.7 dB better in terms of suppressing the effects of a step disturbance at the plant input.

In both instances the controller included P, I, and D (proportional constants, integrators, and differentiators), and the genetic algorithm was allowed to vary its hyperspace by including one or more of each. Not surprisingly, the program re-discovers the PID and PI topology as invented by Callender et al.

They conclude
Quote
This paper has demonstrated that genetic programming can be used to automatically create both the parameter values tuning and the topology for controllers for illustrative problems involving a two-lag plant and a three-lag plant.

Thus the GA controlled not only the parameter values but also the topology, allowing it to vary the hyperspace.

And not only did the GA find solutions, but those solutions were better than the best solutions provided by experts in the field of control technology.

Apropos, Kroo, one of the inventors who patented a design in which GAs were used, comments that "This configuration was independently "discovered" by a genetic algorithm that was asked to find a wing of fixed lift, span, and height with minimum drag. The system was allowed to build wings of many individual elements with arbitrary dihedral and optimal twist distributions. The figure below depicts front views of the population of candidate designs as the system evolves. On the right, the best individual from a given generation is shown."

Adrian Thompson describes in "Notes on Design Through Artificial Evolution: Opportunities and Algorithms" an experiment in the design of an electronic circuit in which it was attempted "to allow evolution to explore the design space as a type (C) system, with the minimum of simplifying constraints or prejudice."

A type (C) system refers to a system in which neither the forward nor the inverse model is tractable.
Quote
It is expected that the performance of a circuit will fall with rising temperature, but Figure 5 reveals that the evolved circuit's behaviour also degrades as the temperature is decreased from 340mK. This kind of behaviour had never been seen in such proposed `single electron' circuits before, and indicates that the circuit actually exploits or relies upon the thermal noise of the electrons at 340mK. This is not necessarily desirable, and perhaps by evaluating across a range of temperatures during evolution a thermally robust solution could be found [7], but we see immediately that evolution is exploring a previously inaccessible part of design space. Desirable or not, it is obvious that evolution is exploring new design space.

Finally, a paper which I believe I have already mentioned, but which captures much of my argument:

John Gero and Vladimir Kazakov, "Adapting Evolutionary Computing for Exploration"
Quote
Abstract. This paper introduces a modification to genetic algorithms which provides computational support to creative designing by adaptively exploring design structure spaces. This modification is based on the re-interpretation of the GA's crossover as a random sampling of interpolations and its replacement with the random sampling of direct phenotype-phenotype interpolation and phenotype-phenotype extrapolation. Examples of the process are presented.

And here the relevant part
Quote
Non-routine designing maps onto creative designing. In routine designing all the variables which specify designs are given in advance. This means that the space of possible designs is known a priori, each point in this space can be constructed and evaluated directly. What needs to be done is to search this space in order to locate an appropriate or most appropriate design. The result here is the "best" design from this space. In nonroutine designing the result is the "best" space of possible designs as well as the "best" design from this space. Processes which modify the design space of the search problem are called exploratory processes.

Gero comments
Quote
One of the well-established notions related to creative designing processes is that an important means of characterising them is to determine whether they have the capacity to expand the state space of possible designs - exploration (Gero, 1994).
And finally
Quote
As can be seen from the example the resulting designs are unpredictable in the sense that they are unexpected given only knowledge of the original designs and of the interpolation/extrapolation functions. In this sense the process matches well the meaning of exploration both in the technical sense used in this paper and in the natural language sense. The designs produced by the system demonstrate both the novelty and unexpectedness of what can be generated.
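
A minimal sketch of the operator that abstract describes, as I read it (the parameter names and sampling range are my own invention): crossover is replaced by sampling along the line through two parent phenotypes, and sampling outside the [0, 1] interval is what lets the process leave the space spanned by the parents.

Code:
import random

def interpolate_extrapolate(p1, p2, explore=0.5):
    # alpha in [0, 1] interpolates between the two parent phenotypes;
    # alpha outside [0, 1] extrapolates beyond them, producing designs
    # that lie outside the space spanned by the originals.
    alpha = random.uniform(-explore, 1.0 + explore)
    return [a + alpha * (b - a) for a, b in zip(p1, p2)]

parent1 = [1.0, 2.0, 3.0]
parent2 = [2.0, 0.0, 4.0]
print(interpolate_extrapolate(parent1, parent2))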

It seems that John was correct in pointing out that creative design requires one to leave the hyperspace of the original and explore different design spaces.  As I have shown, however, GAs are very capable of doing exactly this, exploring hyperspace by varying the dimensions of the search space.  As such I would argue that GAs not only have the potential for innovative/creative solutions but have actually been shown to produce exactly such designs.

Date: 2003/02/27 11:58:13, Link
Author: RBH
As an addendum to Francis's Brainstorms posting archived above, this is the syllabus for Gero's course in Computational Models of Creative Design: Theory and Applications, which contains a number of appropriate references.  I accessed it on February 27, 2003.  If it disappears from the Web I have it on disk and will supply it at need.

RBH

Date: 2003/05/24 14:44:50, Link
Author: RBH
Two contiguous posts from the ISCID thread on the paper referenced above:
Quote
Posted by RBH (Member # 380) on 23. May 2003, 18:17:

Just to remind us all of the 'canonical' definitions of irreducible complexity, these are from ISCID's Encyclopedia:

Irreducible Complexity

Michael Behe's Original Definition:

A single system composed of several well-matched, interacting parts that contribute to the basic function of the system, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin's Black Box, 39)

William Dembski's Enhanced Definition:
A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. (No Free Lunch, 285)

[With a link to "irreducible core"]
Irreducible Core

The parts of a complex system which are indispensable to the basic functioning of the system.

Michael Behe's "Evolutionary" Definition
An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.


The first definition, Behe's original DBB formulation, is clearly an ahistorical one. There is no reference to the past or the pathway to the state of ICness so long as we interpret "basic function" to mean "current function" and assume that a system performs only one function or, if it performs more than one function, we can tell which is "basic." It is also the definition that specifies the operation necessary to classify a system as IC: the knockout procedure. "Interacting" can also be operationally determined by observing correlations between the behaviors of parts. The vagueness is in the term "well-matched." There is no way mentioned in the definition (nor elsewhere in DBB) for 'well-matchedness' to be measured. Hence operationally - that is, experimentally - we have only the knockout procedure and identifying interactions on which to determine IC or not-IC. On Behe's first definition, the programs that evolved to perform EQU meet the two operational criteria - knockout loss of function and interactions. Only the ill-defined "well-matched" stands between the programs and ICness.

Dembski's refinement of Behe's definition introduces two new elements: "basic, and therefore original, function" and "nonarbitrarily individuated parts." The first addition's reference to "original function" introduces history. In order to classify a system as IC we must know that the current function of some system was also its original function. The effect of this move is to definitionally eliminate cooption (which we know to be common in evolution) as a route to an IC system. Hence this definition is restricted to only those systems in which we know cooption did not play a role in the evolution of the system. This definition, in its reference to "irreducible core," preserves the knockout criterion.

The second addition in Dembski's definition is ambiguous. It is a negative prescription ('do not pick parts arbitrarily') but gives no guidance on what is non-arbitrary. In his NFL example of the flagellum, Dembski works with two levels. There's the 'parts of an outboard motor' level - power source, rotor, propeller - and the level of calculation - proteins. There is no clear justification for which level of parts to use for what part (!) of the definition; the choice seems to be arbitrary.

The programs that evolved to perform EQU do not meet Dembski's definition of ICness, since the final function performed by those programs is not the "basic, and therefore original" function. They coopted other functions. While some of those precursor functions are also performed by the final programs, other precursors were sometimes lost along the way. Hence the "original" functions were not always present in the final program.

Behe's "evolutionary" definition also invokes history. It requires that we know the complete pathway by which a candidate IC system evolved, so we can count the number of "unselected steps." This is also interesting for introducing the notion that "irreducible complexity" can take on values other than 0 or 1: "The degree of irreducible complexity is the number of unselected steps in the pathway."

By this "evolutionary" definition the programs that evolved to perform EQU are IC to some degree, since every step on the path to the programs that performed EQU was not "selected." In fact, some steps in at least some of the lineages leading to the final programs were deleterious and hence were selectively disadvantageous - there was selective pressure against them. Hence they display some degree of irreducible complexity.

Thus depending on the definition one chooses, the programs are IC, not IC, or IC to some degree, and we have no guidance in deciding which it is. Therefore unless and until Behe/Dembski, et al. settle what IC means, it is useless from the point of view of doing meaningful research.

RBH

and
Quote
 Posted by Argon (Member # 276) on 24. May 2003, 12:06:

RBH writes:
Dembski's refinement of Behe's definition introduces two new elements: "basic, and therefore original, function" and "nonarbitrarily individuated parts." The first addition's reference to "original function" introduces history. In order to classify a system as IC we must know that the current function of some system was also its original function. The effect of this move is to definitionally eliminate cooption (which we know to be common in evolution) as a route to an IC system. Hence this definition is restricted to only those systems in which we know cooption did not play a role in the evolution of the system. This definition, in its reference to "irreducible core," preserves the knockout criterion.

You know, this definition would probably remove things like blood clotting, a large chunk of the immune system, and maybe even parts of the flagellum from the list of IC systems. For example, the blood clotting cascade is composed of numerous proteases that bear striking similarities to other proteases that are both ancestral to the clotting system and which have different "functions" in the cell. Thus the "original function" of most of the components had nothing to do with clotting. Ditto with the immune system. The current functions of flagellar components are mostly propulsion and cell adhesion, but parts of this system might have originated from a protein translocation system or pore. And so the "original functions" of the flagellar system components might not have been the same. How does one actually determine such things in ancient, ubiquitous systems that have undergone strong selection before diversification?

Obviously (to biochemists at least), it's a practical impossibility to be sure of the "original" function of any component. I think Dembski has a Platonic and "separate creation" view of organisms and biology. Ernst Mayr knocks that viewpoint down in a series of books.

In ruling out the possibility of co-opting other components Dembski seems to convert the IC definition into the truism that unevolvable IC systems are unevolvable. After all, does Dembski think that cells acquire large stretches of DNA that spontaneously appear out of nowhere and have no past? Yes, if one wanted to describe a system that had no past as "IC", one could do it. But then that definition would have little to do with the mechanisms by which evolution operates and would thus be orthogonal to the important question of evolvability.

RBH also wrote
Thus depending on the definition one chooses, the programs are IC, not IC, or IC to some degree, and we have no guidance in deciding which it is. Therefore unless and until Behe/Dembski, et al settle what IC means, it is useless from the point of view of doing meaningful research.

Why should they be the final arbiters of what is and isn't IC? Behe spent plenty of time writing his book and developing his ideas. His whole thesis fundamentally relies upon the ability to properly identify IC systems. Dembski also had a long time to develop a mathematical "model" of ICness. They have both observed and participated in many discussions about these definitions and the problems associated with their various criteria. They have also given many talks on the subject. Since both men understand the crucial importance of having useable, workable guidelines when performing research, particularly in a new area, I have little doubt that they would not have let all these years go by without presenting all the clarifications and important distinctions that they could on this subject.

IC was first defined in Behe's book, DBB. As RBH mentions, it was an ahistorical definition. All subsequent changes made by Behe and Dembski require historical knowledge about a system and thus substantially change the nature of evaluation. I cannot see any good reason why they should bear the "IC" moniker without a subheading to indicate the additional criteria that have been met. For example, it can sometimes be simple to apply Behe's original, operational criteria to determine whether a system is IC. But what this does not tell us is whether such a system was evolvable. Thus we potentially have two classes of IC systems: evolvable or unevolvable. Once a system is determined to be IC, we can then apply additional tests to determine the subclass to which it belongs. Until then it should be placed in a third subclass: "evolvability unknown". Here is how I see the current organization:

Class: IC
* Evolvability status:
  * Unknown
  * Evolvable
    * Criteria type 1: Intermediate steps reproduced.
    * Criteria type 2: Similarity to other evolvable systems.
    * Criteria type 3: etc....
  * Unevolvable
    * Criteria type 1: Requires too many "lucky" neutral mutations. (from Behe IC v2)
    * Criteria type 2: No possible ancestors. (modification of Dembski IC v2)
    * Criteria type 3: Intelligent designer observed.
    * etc...

Personally, I'd prefer that the "IC" label be dropped from the subsequent "redefinitions". I think clever people could invent other, more appropriately descriptive labels that would reflect the actual criteria being applied.

[ 24. May 2003, 12:09: Message edited by: Argon ]

Date: 2003/05/30 23:05:33, Link
Author: RBH
Nic wrote
Quote
...one thing that the story mentions that I hadn't really seen mentioned elsewhere (including the paper IIRC, so I'm a bit suspicious) is that the building up of the functions sometimes "burned its bridges", i.e. eventually no trace of some intermediates can be found in the final program.  Would this be an instance of scaffolding?  It would be cool if so...

I'll have to trudge back through the paper again, and probably through a lineage or two or three, to see if that was the case.  I know that the EETimes article had several serious errors in its description of the research, so much so that I emailed Johnson with them, and he asked if they were serious enough to warrant a Letter to the Editor with the corrections.  I said they were and did so - actually, I told him to use my original email as the correction letter.  This is the email I sent:
Quote
Dear Mr. Johnson,

It was good to see your Avida story in EETimes.  However, there are a couple of misconceptions in it that distort the actual research.

First, the story says "The organism's metabolism consists in the endless execution of the sequence of instructions. Energy from the environment, or "food," is modeled as single-instruction processors (SIPs) that are fed to the CPU. The number of SIPs that a CPU receives is proportional to the length of its tape. Thus, as the CPU becomes more complex in terms of the length of its instruction tape, it is able to get more food from the environment, giving more-complex organisms a competitive advantage."

"SIPs" are "Single Instruction Processing" units, not "processors."  A SIP is a quantum - unit - of 'energy' that allows processing one instruction.  It is not additional processors.  And providing SIPs in proportion to length does not give longer organisms a competitive advantage.  It actually neutralizes genome length as a selective variable.

Second, the story says "SIPs introduce new instructions to the CPU, allowing it to grow as well as to reproduce."  Nope.  SIPs have nothing at all to do with introducing new instructions.  That was the role of mutations.

Third, the story says "The researchers performed evolutionary runs starting with individuals that could replicate themselves but could not perform any logic operations except the simple NAND."

In fact, the initial organisms (Ancestors) could perform NO logic operations, not even NAND.  NAND was in the instruction set available to be inserted by mutations, but was not in the initial organisms.  Further, the primitive NAND instruction could not by itself perform NAND in the context of a critter's program: It had to be appropriately embedded in a context of other instructions that gave it access to registers and I/O.  By itself, NAND in a critter's genome just sits there.

Yours,
RBH

RBH
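
For readers who haven't worked through the paper, here is a toy Python sketch (not Avida code; Avida organisms do this with register and I/O instructions subject to mutation) of how EQU, the most complex rewarded function, can be composed entirely out of NAND, the lone logic primitive in the instruction set.  A single NAND by itself gets you nothing; the work is in how the copies are wired together:

Code:
MASK = 0xFFFFFFFF  # treat values as 32-bit words, as Avida does

def nand(a, b):
    return ~(a & b) & MASK

def equ(a, b):
    # EQU: the output bit is 1 wherever the two input bits agree.
    not_a = nand(a, a)                                  # NOT from NAND
    not_b = nand(b, b)
    both = nand(nand(a, b), nand(a, b))                 # AND from NANDs
    neither = nand(nand(not_a, not_b), nand(not_a, not_b))
    return nand(nand(both, both), nand(neither, neither))  # OR from NANDs

a, b = 0b1100, 0b1010
assert equ(a, b) == ~(a ^ b) & MASK
print(bin(equ(a, b)))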

Date: 2003/06/06 23:08:20, Link
Author: RBH
I want to capture this before it gets Modded away on Brainstorms.  The second-to-last paragraph is the one vulnerable to Modding:
Quote
posted 06. June 2003 23:56

Argon,

The program above performs EQU, NOT, OR-N, OR, AND-N, and NOR. It does not perform NAND, AND, or XOR. It's difficult to say by inspecting the code which of those simpler functions are necessary components of the code that performs EQU, since in the evolution of the program various primitive instructions become involved in multiple functions. There's an example above - three instructions that are part of the replication code have been recruited to also participate in performing EQU. Thus individual instructions can have multiple roles, contributing to several functions, making program analysis difficult to nigh unto impossible.

There are other instances of this same kind of difficulty. For example, in the hardware evolution literature there was a flurry of publicity a year or so ago about a (hardware) radio receiver evolving under conditions that were selective for oscillation. In other outcomes of that same study, oscillators evolved as fairly simple circuits in the sense of using relatively few components of known properties that behaved appropriately under the selective conditions. The experimenters, experienced circuit designers themselves, could not figure out how the circuits did it. The outcomes of evolution in these kinds of studies are not simple, not obvious, and certainly not transparent in how they perform their behaviors.

Mike,

The principal function of the programs is to perform the logic operation EQU. All 23 programs (147 in the whole experiment, actually) evolved to perform EQU. The 23 that did so in the main condition are all different from one another. I don't know about the 124 that evolved in the various control conditions. As I noted above for the Case Study program, they all also performed some of the simpler logic operations, which are legitimately labeled "functions," too. But just as various kinds of bacteria evolved different sorts of flagella to perform the same principal function, motility, so the various Avida evolutionary runs evolved different programs to perform EQU.

Note, by the way, that once a program that performs EQU has evolved, the simpler logic operations can drop out of the competitive race - now be unrewarded - and the main function will persist in the population so long as it continues to be selectively advantageous. Way downstream, since they no longer confer selective advantage, those simpler functions may no longer be performed and thus won't be visible in the current population. We'd see no "precursors" of EQU in the extant population and we'd wonder where the heck our programs with the ability to perform EQU came from. And we could generate an "irreducibly complex programs that perform EQU can't evolve" conjecture, and we could challenge critics of our IC program conjecture to produce the exact pathway and tell us what those hypothetical precursors are. (And if they can't specify the precise pathway and the exact precursors, we could say "Neener neener neener!" ;) )

As to parts, in constructing your list of 14 you have conflated "part" and "kind of part." 'q' is a "kind of part;" 'q as instruction #11' is a "part." It's like a brick pillar made of a slew of bricks piled up one on top of another. All the bricks are identical - all are the same "kind of part" - but pulling out the brick on the bottom of the pillar has a considerably different effect than pulling out the brick on top. They are different "parts" in the context of the structure. I take "part" in the program above to be coterminous with "Instruction #." Thus 'q as Instruction #11' is a different part from 'q as Instruction #17'. So the example above contains 60 "parts." In a computer program, the same primitive instruction used in different places is most definitely not redundancy!

RBH

Date: 2003/07/23 00:04:47, Link
Author: RBH
Dembski's 'contains no actual biology' remark echoes Behe's remark in the Chronicle of Higher Education, where he was quoted as saying
Quote
But Michael J. Behe, a professor of biological sciences at Lehigh University who is one of the most vocal proponents of intelligent design, says that the simulation proves nothing. "If I were a Darwinist, I would be embarrassed for this paper to be published in Nature," he said.

"There's precious little real biology in this project," Mr. Behe said. For example, he said, the results might be more persuasive if the simulations had operated on genetic sequences rather than fictitious computer programs.

Dembski's comment about the Lenski, et al. study 'begging the point' because there were functional intermediates available in the simulation echoes some of the objections raised in the ISCID Literature Review Forum discussion of the paper.  There's no indication in his Introduction that Dembski learned anything from that discussion, though.

Finally, the inclusion of intermediates in the Lenski, et al., study was not an assumption or requirement; it was part of the experimental design.  They actually ran evolutionary runs with 38 different combinations of intermediates, including the extreme case of no simpler intermediates, the case with all 7 'simpler' functions (simpler than EQU), and 36 different conditions with one or a pair of the intermediates removed.  In 37 of the 38 conditions, lineages capable of performing the input-output mapping corresponding to EQU evolved; only in the condition in which there were no intermediates did the EQU mapping fail to appear in 50 runs.

RBH



Date: 2003/07/23 15:40:56, Link
Author: RBH
Dembski has started a thread on ARN on the irreducible complexity/Lenski, et al portion of his Introduction here.  I provided a cross-reference from that thread to yersinia's posting on ISCID's Literature Review Forum.

RBH

Date: 2004/02/21 13:23:11, Link
Author: RBH
Some coevolutionary EAs out in the (near-) real world:

A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling

Abstract:
Quote
This paper addresses the integrated problem of process planning and scheduling in job shop flexible manufacturing systems. Due to production flexibility, it is possible to generate many feasible process plans for each job. The two functions of process planning and scheduling are tightly interwoven with each other. The optimality of scheduling depends on the result of process planning. The integration of process planning and scheduling is therefore important for an efficient utilization of manufacturing resources. In this paper, a new method using an artificial intelligent search technique, called symbiotic evolutionary algorithm, is presented to handle the two functions at the same time. For the performance improvement of the algorithm, it is important to enhance population diversity and search efficiency. We adopt the strategies of localized interactions, steady-state reproduction, and random symbiotic partner selection. Efficient genetic representations and operator schemes are also considered. While designing the schemes, we take into account the features specific to each of process planning and scheduling problems. The performance of the proposed algorithm is compared with those of a traditional hierarchical approach and an existing cooperative coevolutionary algorithm. The experimental results show that the proposed algorithm outperforms the compared algorithms.


Multi-objective cooperative coevolution of artificial neural networks (multi-objective cooperative networks).

Abstract:
Quote
In this paper we present a cooperative coevolutive model for the evolution of neural network topology and weights, called MOBNET. MOBNET evolves subcomponents that must be combined in order to form a network, instead of whole networks. The problem of assigning credit to the subcomponents is approached as a multi-objective optimization task. The subcomponents in a cooperative coevolutive model must fulfill different criteria to be useful; these criteria usually conflict with each other. The problem of evaluating the fitness of an individual based on many criteria that must be optimized together can be approached as a multi-criteria optimization problem, so the methods from multi-objective optimization offer the most natural way to solve the problem. In this work we show how, using several objectives for every subcomponent and evaluating its fitness as a multi-objective optimization problem, the performance of the model is highly competitive. MOBNET is compared with several standard methods of classification and with other neural network models in solving four real-world problems, and it shows the best overall performance of all classification methods applied. It also produces smaller networks when compared to other models. The basic idea underlying MOBNET is extensible to a more general model of coevolutionary computation, as none of its features are exclusive to neural network design. There are many applications of cooperative coevolution that could benefit from the multiobjective optimization approach proposed in this paper.

and Applying Cooperative Coevolution to Inventory Control Parameter Optimization.  Inventory control may not seem like a sexy application, but it's a large component of many firms' costs.
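
For the flavor of the technique, here is a bare-bones Python sketch of cooperative coevolution (the objective function and every parameter are invented for illustration; nothing is taken from the papers above).  Two subpopulations each evolve one piece of the solution, and an individual is scored by pairing it with the best current partner from the other subpopulation, with steady-state replacement as in the first abstract:

Code:
import random

def fitness(x, y):
    # Toy objective standing in for, say, a reorder point (x) and an order
    # quantity (y) in an inventory problem; higher is better.
    return -((x - 3.0) ** 2 + (y - 7.0) ** 2)

def evolve(pop, evaluate, steps=200):
    # Steady-state loop: mutate a random member, replace the worst if better.
    for _ in range(steps):
        child = random.choice(pop) + random.gauss(0, 0.3)
        worst = min(range(len(pop)), key=lambda i: evaluate(pop[i]))
        if evaluate(child) > evaluate(pop[worst]):
            pop[worst] = child
    return pop

xs = [random.uniform(0, 10) for _ in range(20)]
ys = [random.uniform(0, 10) for _ in range(20)]
best_x, best_y = xs[0], ys[0]
for generation in range(15):
    xs = evolve(xs, lambda x: fitness(x, best_y))  # score x against best y
    best_x = max(xs, key=lambda x: fitness(x, best_y))
    ys = evolve(ys, lambda y: fitness(best_x, y))  # score y against best x
    best_y = max(ys, key=lambda y: fitness(best_x, y))

print(f"co-evolved solution: x = {best_x:.2f}, y = {best_y:.2f}")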

RBH

Date: 2005/01/26 19:30:04, Link
Author: RBH
In the 1974 edition of "Scientific Creationism" Morris did a calculation for the formation of a large organic molecule that was exactly parallel to Dembski's calculation for the flagellum.  I wrote an essay in the late '80s for the Ohio Committee of Correspondence on Evolution about it.  Dembski has provided no improvement (!;) on Morris's treatment.

Date: 2005/03/27 21:01:38, Link
Author: RBH
Since ARN's Moderation is twitchy and its archived threads occasionally disappear from view, I'm archiving two posts I made there on irreducible complexity.  This is the first.

In another thread jon_e provided a link to a recent paper by Dembski revisiting irreducible complexity.  jon_e was making the point that "irreducible complexity" is alive and well in ID.  I had previously scanned the paper but had not read it carefully.  Looking again at it tonight, I see that Dembski has made a significant change in Behe's original conception of irreducible complexity, a change that eviscerates the utility of "irreducible complexity."  Rather than being alive and well, in the light of Dembski's new paper irreducible complexity is dead on arrival.

To realize the nature of the change, it's first necessary to know what an "operational definition" is.  Very briefly, an operational definition is a description of the procedures (operations) used to measure the value of a variable.  So, for example, an operational definition of "temperature" is a description of how temperature is measured -- the apparatus used, conditions that apply, and steps performed in making the measurement.  The Methods sections of research papers contain explicit or implicit operational definitions of the variables under study.

With respect to any system, "irreducible complexity" is a variable that takes one of two values, 1 or 0 -- present or absent, true or false.  So an operational definition of irreducible complexity is a description of the steps carried out to determine whether a given system is or is not IC.  In Behe's original conception, the IC value for a system is assigned to be "1" (true) if the loss of any part/element/component prevents the system from performing the primary function that it performs when it is whole -- a 'knock-out' operation -- and "0" (false) otherwise.  So Dembski wrote in 1998, two years after DBB    
Quote
Central to his [Behe's] argument is his notion of irreducible complexity. A system is irreducibly complex if it consists of several interrelated parts so that removing even one part completely destroys the system’s function.
and    
Quote
Also, whether a biochemical system is irreducibly complex is a fully empirical question: Individually knock out each protein constituting a biochemical system to determine whether function is lost. If so, we are dealing with an irreducibly complex system. Experiments of this sort are routine in biology.
The operation specified for determining IC is to knock out a part and see if the system still works: that's the operational definition of "irreducible complexity".
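
That original definition is so spare that it fits in a few lines of Python (an editorial sketch; the 'mousetrap' predicate below is a stand-in for whatever empirical knockout assay one actually runs):

Code:
def is_irreducibly_complex(parts, performs_function):
    # IC = true iff the intact system performs the basic function and the
    # removal of any single part destroys it (Behe's knockout operation).
    if not performs_function(parts):
        return False
    return all(
        not performs_function(parts[:i] + parts[i + 1:])
        for i in range(len(parts))
    )

# Stand-in system: a 'mousetrap' that needs every one of its five parts.
PARTS = ["hammer", "spring", "catch", "holding bar", "platform"]
def catches_mice(parts):
    return set(PARTS).issubset(parts)

print(is_irreducibly_complex(PARTS, catches_mice))  # True under this test

Note that Dembski's added requirement, that no simpler system whatsoever performs the function, corresponds to no such finite loop over parts; that is exactly the problem developed below.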

In the recent paper referenced by jon_e, though, Dembski adds another operation to the procedure used to determine the value taken by IC:    
Quote
Thus, removing parts, even a single part, from the irreducible core results in complete loss of the system’s basic function. Nevertheless, to determine whether a system is irreducibly complex, it is not enough simply to identify those parts whose removal renders the basic function unrecoverable from the remaining parts. To be sure, identifying such indispensable parts is an important step for determining irreducible complexity in practice. But it is not sufficient. Additionally, we need to establish that no simpler system achieves the same basic function.  (Emphasis added)
and    
Quote
To determine whether a system is irreducibly complex therefore employs two approaches: (1) An empirical analysis of the system that by removing parts (individually and in groups) and then by rearranging and adapting remaining parts determines whether the basic function can be recovered among those remaining parts. (2) A conceptual analysis of the system, and specifically of those parts whose removal renders the basic function unrecoverable, to demonstrate that no system with (substantially) fewer parts exhibits the basic function. (Emphases added)
That last criterion is an IC killer, at least empirically.  One must show that no system that is simpler than the system under analysis can perform the function performed by the system under analysis.  (I'm leaving aside the other change, the "rearranging and adapting remaining parts" addition to the original knockout operation.  That change also raises problems for determining the value taken by IC.)

Note carefully that it's not sufficient to show that some subsystem of the system under analysis can't perform the function; according to Dembski it is necessary to show that no simpler system can perform it, regardless of whether that simpler system resembles the system under analysis or not.

It might be thought that I'm giving Dembski's words an uncharitable reading, but that's belied by Dembski's own example, sandwiched between the two quotations above:    
Quote
Consider, for instance, a three-legged stool. Suppose the stool’s basic function is to provide a seat by means of a raised platform. In that case each of the legs is indispensable for achieving this basic function (remove any leg and the basic function can’t be recovered among the remaining parts). Nevertheless, because it’s possible for a much simpler system to exhibit this basic function (for example, a solid block), the three-legged stool is not irreducibly complex.
Please pause and think about that for a moment.  

Now continue reading.

On Behe's original operational definition, that three-legged stool is irreducibly complex: remove any of the four components (three legs and the seat -- Dembski forgot to mention the seat) and it can no longer function "to provide a seat by means of a raised platform", and so on the original operational definition the stool is irreducibly complex.  But under Dembski's revised operational definition, a three-legged stool is not irreducibly complex because some simpler system (that does not contain any of the parts of the original stool) can perform that same function.

As a result, in order to show that a system is IC, intelligent design "theorists" must show not only that the system fails to perform its function when any part is removed, but they must also show that no other simpler system can perform that function.  That is, they must establish a universal negative.  And (ask your friendly neighborhood logician) it is impossible to establish a universal negative.  (Hint: black swans.)  Dembski is back in the inductive soup.  On Dembski's new operational definition, not even Behe's mousetrap is irreducibly complex!

In my less-than-humble opinion, in revising its operational definition Dembski has thoroughly gutted the notion of irreducible complexity.

RBH



Date: 2005/03/27 21:06:05, Link
Author: RBH
This is the second of the archived posts.

I remarked above that "rearranging and adapting remaining parts" was still lurking.  Let me bring it out of the shadows.

Besides 'no other simpler system', Dembski's second addition to the operational definition of irreducible complexity is
Quote
... by rearranging and adapting remaining parts determine ... whether the basic function can be recovered among those remaining parts.
As Dembski describes it, we have knocked out a part and found that the system's "basic function" is gone.  Now we must establish that the remaining parts cannot be rearranged and/or adapted to restore the basic function.  I've been trying to think of a good analogy, and I think I have it: a simple household mousetrap.

Behe argues in Darwin's Black Box that a common mousetrap is irreducibly complex on his original definition.  It consists of five parts (hammer, spring, catch, holding bar, and platform).  Remove any one of them and the basic function of the mousetrap is gone.  While critics have argued that a mousetrap is not irreducibly complex, Behe has vigorously defended it, mostly by arguing that rearrangements and adaptations are necessary to get the simpler traps the critics described.  Poor Behe, betrayed by his comrade in arms.

Now, alas, the mousetrap falls prey to Dembski's second test, for it is clear that one can remove any one of the five parts and with some modest rearranging and adapting of the remaining parts recover the basic function.  See here for animations of mousetraps employing one, two, three, and four of the parts of the complete mousetrap.

The only component common to all of the reduced and adapted mousetraps is a piece of wire.  Any of the other parts can be eliminated and with suitable rearrangements and adaptations we can recover the basic function.  Now consider testing an intact mousetrap to see if it's IC.  In the first step of our analysis we knock out any component.  As the illustrations show, we can "rearrange and adapt" the remaining parts to recover the basic function.  Knock out any one of the parts and the remainder are sufficient, with some rearranging and adapting, to serve the basic function.  (Some of the "adaptations" -- or perhaps they're "rearrangements" -- would actually eliminate some of the remaining parts.)  The simpler system may not function real well, perhaps, but of course evolution doesn't need a whole lot of relative advantage to build on.  And the criterion for IC is elimination of the "basic function," not merely diminution or attenuation.

What does this mean?  It means that by Dembski's operational criteria for irreducible complexity, even Behe's mousetrap, the iconic example of irreducible complexity, is not IC.  

I'll say it again: Dembski has eviscerated irreducible complexity as an empirical marker of design, and in the process has cut his colleague off at the knees.

RBH



Date: 2005/03/27 21:10:48, Link
Author: RBH
Also for the record, this is Dembski's sole response:
Quote
IC will be around for a long time. The nonsimplifiability criterion that I introduce is not nearly as onerous as RBH makes out. True, nonsimplifiability, as I apply it to IC, says that no simplification is possible for a system performing the basic function. But basic function, as I define it, also includes the way in which the function is performed. Thus, it is no simplification of the bacterial flagellum to substitute a paddle, say, that doesn't spin, that propels the bacterium through its watery environment, and that is simpler. Any simplification of the bacterial flagellum would have to be a bidirectional motor-driven propeller. If there's a concession in my treatment of IC with the nonsimplfiability criterion, it is more than made up for in requiring IC only for irreducible cores. Irreducible cores extend IC to many systems that Behe's original definition did not cover. It is often easy to show that cores are nonsimplfiable even if the apparatus as a whole isn't.

Date: 2005/03/28 21:31:21, Link
Author: RBH
Ed Brayton has some remarks on a similar challenge to Dawkins, with a link to some of their correspondence with Dawkins, in his "Why I Won't Debate Creationists" article.

Brayton also has the text of a letter from Priest to a geologist informing the geologist that he, like Wesley, is in default.

Priest is active in West Virginia's anti-evolution movement when he's not being Mastro's secretary.

RBH



Date: 2005/05/19 00:36:06, Link
Author: RBH
I'll briefly address one of Samada's questions.  He asked
Quote
2. Does RBH wish to improve upon intelligent design theory by remaining inside of its current (morphing-big tent) formulations or does he need something conceptually outside of what ID theorists have thus far suggested if MDT is to be successful? Is MDT formulated basically as anti-ID?
Since there is at present no "big tent" ID theory; since ID "theorists" have thus far suggested nothing by way of explanations for purportedly designed objects; and since they have thus far not even developed a minimal catalog of purportedly designed structures and/or processes on which to begin to attempt systematics, there's no ID "theory" to be part of.  About all ID "theorists" have in common is that they refer to a single designing agency.  No other properties of that intelligent agency are specified or even conjectured; indeed, Dembski rules out conjectures about the intelligent agency.

I offered an alternative formulation that accommodates a good deal of existing biological data much more comfortably than the single-designer conjecture, generates a real research program, and provides a genuine alternative for school boards and legislatures, because it actually does something besides throw stones at naturalistic evolutionary theory: it offers testable hypotheses, an explanatory structure, and (not incidentally) escapes First Amendment Establishment constraints.  I commend it to Kansas's attention.

RBH

Date: 2005/05/19 00:53:06, Link
Author: RBH
To circle back to the main theme of this thread, I suggest that participants read Multiple Designers Theory.  While it may not be to Finley's taste, I did briefly discuss issues associated with some properties of designed objects that might be expected to flow from the fact that designers vary in their knowledge, skills, and abilities.

RBH

Date: 2005/07/10 16:37:09, Link
Author: RBH
I have no clear idea why they might be linked, but the links go back more than a decade.  Phillip Johnson was a founding member of The Group for the Scientific Reappraisal of the HIV-AIDS Hypothesis, while Wells signed on by 1993.  Wells, of course, is (or was) associated with Duesberg, a Berkeley molecular biologist and a primary academic pusher of HIV denial.  Interestingly, Johnson (1991) predates Duesberg (1993) as a signer of that group's statement.  Tom Bethell signed on in 1993, too.

Dr. G.H. had a post last year on The Panda's Thumb on HIV denial, Holocaust denial, and ID.

RBH

Date: 2005/11/01 16:56:59, Link
Author: RBH
It was originally a news story carried on Yahoo news, on a page that is no longer available.  The full context is given here: http://atheism.about.com/b/a/160018.htm.

RBH

Date: 2006/03/28 10:51:37, Link
Author: RBH
Apropos of Wesley's remark
Quote
but rather political parties and industries whose platforms and profit margins are threatened by scientific research into things like global warming, management and recovery of endangered species, health effects of industrial products, and ecology.
Deborah Owens-Fink, a primary pusher of ID Creationism on the Ohio State BOE, mentioned global warming as another topic that deserves (her brand of) "critical analysis" in the Ohio standards.

Not incidentally, she has also tied defending good science to atheism and liberalism.  She's a walking, talking source of ammo for any suit that might be filed in Ohio.

RBH

Date: 2007/01/26 19:16:27, Link
Author: RBH
Cornelius wrote    
Quote
How is it that similarities such as the pentadactyl pattern are such powerful evidence for evolution, in light of equal and greater levels of similarity in distant species, such as displayed in the marsupial and placental wolves? Please look at the very bottom here:

PBS coloring book

Then look at here:

Wikipedia figure

And then consider my question, and explain why similarities such as the pentadactyl pattern are such powerful evidence.  (RBH: Fixed the raw urls to create links)
That first one is the coloring book again.  The second is an illustration of the fact that many (though not all) tetrapods have five 'fingers'.  Someone upthread was right: Cornelius really doesn't know the difference between homoplasy and homology.

RBH

Date: 2007/05/27 13:56:26, Link
Author: RBH
Quote (Arden Chatfield @ May 26 2007,22:06)
Quote (stevestory @ May 26 2007,16:46)
http://scienceblogs.com/strangerfruit/loldembski.jpg


We could start a whole thread just about Dembski's decision to wear that sweater.  :O

The sweater's pretty clearly a faux academic robe.

Date: 2007/07/30 21:14:52, Link
Author: RBH
Let me add the banning of Febble, a Ph.D. neuroscientist and theist, who argued that, based on Dembski's definition of intelligence, the process of random mutation and natural selection is an intelligent process.  DaveTard banned her, saying
Quote
febble is no longer with us - anyone who doesn’t understand how natural selection works to conserve (or not) genomic information yet insists on writing long winded anti-ID comments filled with errors due to lack of understanding of the basics is just not a constructive member - good luck on your next blog febble

Date: 2008/02/28 11:52:52, Link
Author: RBH
Quote (Cubist @ Feb. 28 2008,08:04)
   
Quote (Erasmus @ FCD,Feb. 27 2008,08:46)
     
Quote
The trouble is, nobody else in the ID movement seems to know, either!


Cubist, I'd say that they do know.  And I'd agree with them partway (separating for the moment the messengers).  If you take what some of these demonstrated liars say at face value, their claim is that sometimes we can analytically deduce some property of some features of some objects as being 'designed' by some agents.

That's the ID claim, yes. But that claim is false. Consider that the class of "Designed entities" covers everything from a ham sandwich to a performance of Beethoven's Fifth Symphony to an F-18 fighter jet; exactly what 'signature of Design' do all Designed entities share in common? Hell, what 'signature of Design' can all Designed entities share in common? Thus, looking for Design is a fool's game. So what real scientists do is, they note that every Designed object known to Man must necessarily have been manufactured, and therefore, real scientists look for signs of Manufacture. And if they find signs of Manufacture, that's how they know whatever-it-was is a Designed entity.
ID, of course, is absolutely silent on the question of how the Designer implemented His/Her/Its/Their Design(s)...

I'll repeat what I've posted many times over the years.  The "theory" of ID is this:    
Quote
Sometime or other, something(s) or other designed something or other, and then somehow or other manufactured that thing in matter and energy, all this occurring while leaving no independent evidence of the design process or the manufacturing process, and while providing no independent evidence for the presence, or even the existence, of the designing and manufacturing agent(s).
Now a question for Kevin is whether he can fill in even one of the placeholders (some       ) in that statement.  My bet is no.



Date: 2010/03/16 16:59:16, Link
Author: RBH
Cubist wrote  
Quote
... as far as I can tell, ID can be accurately (albeit cruelly) summarized in seven words: Somehow, somewhere, somewhen, somebody intelligent did something.
I have to demur: It's worse than that.  My wording, posed a number of times in various venues, is this:

Sometime or other, some intelligent agent(s) designed something or other, and then somehow or other manufactured that designed thing in matter and energy, all the while leaving no independent evidence of the design process or the manufacturing process, and leaving no independent evidence of the presence, or even the existence, of the designing and manufacturing agent(s).

If BJRay can fill in any of the placeholders in that statement, with evidence appended, I'd be grateful.



Date: 2011/01/25 09:37:09, Link
Author: RBH
This is the place to continue the PT thread on Atheistoclast's strange notions of information theory and evolution.

=====