Creation Questions

Tag: dna

  • The Irreducibility of Life

    The Irreducibility of Life

    In his paper “Life Transcending Physics and Chemistry,” Michael Polanyi examines biological machines in a way that illuminates the explanatory failures of materialism. The prevailing materialist paradigm, that life can be fully explained by the laws of inanimate nature, fails to account for higher-order realities whose operations and structures involve non-material judgements and interpretations. He specifically addresses the views of scientists such as Francis Crick, who, along with James Watson, argued for a thoroughly reductionist and nominalist view based on their discovery of DNA’s structure. For Polanyi, all biological organisms have a life-transcending nature akin to machines and their transcendent properties. His central argument rests on the concept of “boundary control”: while there are laws that govern physical reactions (as Crick would accept), there are also particular principles of form and function which are distinct from, and not reducible to, those lower-level laws.

    There is a real clash between Polanyi’s position and the reductionist/nominalist position commonly held by molecular biologists. To broach this divergence, he explains how the then-recent discovery of the genetic function of DNA was interpreted as the final blow to vitalist thought within the sciences. He writes:

    “The discovery by Watson and Crick of the genetic function of DNA (deoxyribonucleic acid), combined with the evidence these scientists provided for the self-duplication of DNA, is widely held to prove that living beings can be interpreted, at least in principle, by the laws of physics and chemistry.”

    Polanyi explicitly rejects Crick’s interpretation, which represents the mainstream position in both academic and popular circles. Polanyi notes that his own principle “has so far been accepted by few biologists and has been sharply rejected by Francis Crick, who is convinced that all life can be ultimately accounted for by the laws of inanimate nature.” This same sentiment can indeed be found in Crick’s book “Of Molecules and Men.” Crick writes the following:

    “Thus eventually one may hope to have the whole of biology “explained” in terms of the level below it, and so on right down to the atomic level.”

    To dismantle the materialist argument, Polanyi utilizes the analogy of a machine. A machine cannot be defined or understood solely through the physical and chemical properties of its materials. Take a watch and feed it into a device that can produce a detailed atomic map of it: could even the best chemist give a coherent answer as to whether the watch is functioning or not? Worse, could one even say what a watch is, if all that exists is matter in motion for no particular reason? Polanyi puts it best:

    “A complete physical-chemical topography of my watch—even though the topography included the changes caused by the movements in the watch—would not tell us what this object is. On the other hand, if we know watches, we would recognize an object as a watch by a description of it which says that it tells the time of the day… We know watches and can describe one only in terms like ‘telling the time,’ ‘hands,’ ‘face,’ ‘marked,’ which are all incapable of being expressed by the variables of physics, length, mass, and time.”

    Once you see this distinction, you are invariably led (as Polanyi was) to two distinct substrata of explanation, what he calls the concept of dual control. Obviously, there are physical laws which dictate the constraints and operations of all matter, and all material things can be explained by those very laws. However, those laws are only meaningfully called constraints when there is some notion of intention or design to be constrained. The shape of any machine, man-made or biological, is not determined by natural laws; indeed, it cannot be determined by them in any way. Polanyi elaborates on this relationship:

    “The machine is a machine by having been built and being then controlled according to principles of engineering. The laws of physics and chemistry are indifferent to these principles; they would go on working in the fragments of the machine if it were smashed. But they serve the machine while it lasts; machines rely for their operations always on the laws of physics and chemistry.”

    As I hinted at before, Polanyi also applies this logic to biological systems, arguing that morphology is a boundary condition in the same way that the design of a machine is a boundary condition. Biology cannot be reduced to physics because the structure that defines a living being is not the result of physical-chemical equilibration. Physical laws do not intend to create, nor do they care whether anything functions. Instead, “biological principles are seen then to control the boundary conditions within which the forces of physics and chemistry carry on the business of life.”

    Where Polanyi and Crick truly disagree, then, is in their interpretation of the explanatory power of nature and of how DNA fits within their respective frameworks. While Crick views DNA as a chemical agent that proves reducibility, Polanyi argues that the very nature of DNA as an information carrier proves the opposite. For a molecule to function as a code, its sequence cannot be determined by chemical necessity. If chemical laws dictated the arrangement of the DNA molecule, it would be a rigid crystal incapable of conveying complex, variable information. Polanyi writes:

    “Thus in an ideal code, all alternative sequences being equally probable, its sequence is unaffected by chemical laws, and is an arithmetical or geometrical design, not explicable in chemical terms.”

    By treating DNA as a transmitter of information, Polanyi aligns it with other forms of communication whose content is not reducible to the physical medium, such as a book. The physical chemistry of the ink and paper does not explain the content of the text. Similarly, the chemical properties of DNA do not explain the genetic information it carries. Polanyi contends that Crick’s own theory inadvertently supports this non-materialist conclusion:

    “The theory of Crick and Watson, that four alternative substituents lining a DNA chain convey an amount of information approximating that of the total number of such possible configurations, amounts to saying that the particular alignment present in a DNA molecule is not determined by chemical forces.”
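    To make the combinatorial point concrete, here is a minimal sketch in Python. The sequence length is an illustrative number of my own choosing, not a figure from Polanyi; the sketch only shows how fast the space of chemically equivalent alternatives grows when all four bases are equally possible at each position:

    ```python
    import math

    # A sketch of Polanyi's combinatorial point: if each of the four bases is
    # chemically possible at every position, the number of alternative sequences
    # grows as 4**n, and the information capacity as 2 bits per base.
    n = 1000                              # a hypothetical 1,000-base stretch of DNA
    bits = n * math.log2(4)               # = 2n bits of capacity
    orders_of_magnitude = n * math.log10(4)

    print(f"Alternative sequences: 4^{n} (about 10^{orders_of_magnitude:.0f})")
    print(f"Information capacity: {bits:.0f} bits")
    ```

    None of those roughly 10^602 alternatives is favored or forbidden by chemistry, which is precisely why the sequence can carry information at all.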

    Therefore, the pattern of the organism, derived from the information in DNA, represents a constraint that physics cannot explain. It is a boundary condition that harnesses matter. Polanyi concludes that the organization of life is a specific, highly improbable configuration that transcends the laws governing its atomic constituents:

    “When this structure reappears in an organism, it is a configuration of particles that typifies a living being and serves its functions; at the same time, this configuration is a member of a large group of equally probable (and mostly meaningless) configurations. Such a highly improbable arrangement of particles is not shaped by the forces of physics or chemistry. It constitutes a boundary condition, which as such transcends the laws of physics and chemistry.”

    In this way, Polanyi refutes the nominalist materialist perspective by demonstrating that the governing principles of life (its form, function, and information content) are logically distinct from, and irreducible to, the physical laws that govern inanimate matter. Physical laws are, then, only one piece of the explanatory puzzle; they are insufficient to account for the particular organizations of matter that they do not themselves determine.

  • Mutation is not Creation

    Mutation is not Creation

    Evolution is certainly a tricky word.

    For a creationist, it’s clear as day why. Two distinct definitions are being used interchangeably, which blurs the lines and confounds any attempt at productive dialogue. The first is the technical definition:

    “Change in allele frequencies in a population over time.”

    The breakdown: Alleles are alternative versions of a gene that differ in part of their sequence, which often makes the functional outcome different in some way.

    The frequencies in a population are the proportions at which particular alleles occur among all copies of that gene in the population.

    Finally, the premise of this definition is that the proportion of organisms in a population carrying a certain trait can grow or diminish over time.

    This seems to me a very uncontroversial thing to hold to. Insofar as evolution could be a fact, this is certainly hard to deny.

    All that is needed for this first definition is mechanisms for sorting and redistribution of existing variation.
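    To see how modest this first definition really is, here is a toy sketch with hypothetical genotypes for a made-up coat-colour gene; it shows a change in allele frequencies between two generations using nothing more than counting:

    ```python
    from collections import Counter

    # Each individual carries two alleles of a hypothetical gene, "B" or "b".
    generation_1 = ["BB", "Bb", "Bb", "bb", "Bb", "BB", "bb", "Bb"]
    generation_2 = ["BB", "BB", "Bb", "Bb", "BB", "Bb", "BB", "bb"]

    def allele_frequencies(population):
        """Count allele copies across all individuals and return their proportions."""
        counts = Counter("".join(population))
        total = sum(counts.values())
        return {allele: count / total for allele, count in counts.items()}

    print(allele_frequencies(generation_1))  # e.g. {'B': 0.5, 'b': 0.5}
    print(allele_frequencies(generation_2))  # the 'B' frequency has risen
    ```

    Nothing in this calculation requires a new allele to appear; the frequencies merely shift among alleles that were already there.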

    However, what is commonly inferred from the term is an altogether separate conception:

    “All living things are descended from a common ancestor.”

    This is clearly different. An evolutionist may agree, but argue that these are merely differences in degree (or scale). But is that the case?

    The only way to know whether the one definition flows seamlessly into the next or whether this is a true equivocation is to understand the underlying mechanism. For instance, let’s talk about movement.

    South America and Asia are roughly four times further apart than Australia and Antarctica. Yet, I could say, rightly, that I could walk from South America to Asia, but I could not say the same about Australia and Antarctica. Why is this? If I can walk four times the distance in one instance, why should I be thus restricted?

    The obvious reason is this: Australia and Antarctica are separated by the entire width of the deep, open Southern Ocean. I should not expect that I can traverse, by walking, two places with no land betwixt them.

    The takeaway is this: My extrapolation is only good so long as my mechanism is sufficient. Walking is only possible with land bridges. Without land bridges, it doesn’t matter the distance; you’re not going to make it.

    This second definition requires mechanisms for sorting and redistribution of existing variation as well as creation of new biological information and structures.

    With that consideration, let us now take this lesson and apply it to the mechanisms of change which evolutionists espouse.

    There are many, but we will quickly narrow our search.

    Natural Selection: This is any process by which the environment (ecology, climate, niche, etc.) culls a population.

    Gene Flow: This is the movement of alleles between populations as individuals migrate and interbreed.

    Genetic Drift/Draft: Drift is the random fluctuation of allele frequencies due to chance sampling rather than selection pressure; draft (hitchhiking) is the change in an allele’s frequency due to selection acting on linked sites.

    Sexual Selection/Non-Random Mating: This is the process by which organisms preferentially choose mates based on their phenotypes.

    The point of this exercise is to observe that these are all mechanisms of sorting and redistribution of existing variation, but they are not the mechanisms that create that variation in the first place. Any mechanism that lacks creative power is insufficient to account for our second definition.
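    As a minimal illustration of that claim, consider the following sketch. The allele labels and the toy fitness difference are assumptions of mine; the point is only that selection and drift change how common the pre-existing alleles are without ever adding a new one:

    ```python
    import random

    # Two pre-existing alleles, one with an arbitrary toy fitness advantage.
    random.seed(1)
    population = ["A1"] * 50 + ["A2"] * 50
    fitness = {"A1": 1.0, "A2": 0.9}

    for generation in range(100):
        # Weighted resampling: selection plus drift, but no new alleles ever enter.
        weights = [fitness[a] for a in population]
        population = random.choices(population, weights=weights, k=len(population))

    print(set(population))  # still a subset of {"A1", "A2"}; nothing new appears
    ```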

    The mechanism that is left is, you might have guessed, mutation.

    Here’s the problem: mutation is itself a conflation. We need to unravel the many ways in which DNA can change. There are many kinds of mutations, and what’s true for one may not be true for another. For example, it is often said that mutations are:

    1. Copying errors
    2. Creative
    3. Random with respect to fitness

    However, this is hardly the case for many of the various phenomena that are classified as mutations.

    For instance, take recombination.

    Recombination is not a copy error. It is a very particular and facilitated meiotic process, one marked by deliberate orchestration and agency.

    Recombination is not creative. Although it can technically cause a change in allele frequencies (as a new genotype is being created), so can every other non-creative process. It can no more create new genetic material than a card shuffler can create new cards.

    Recombination is not random with respect to fitness. Even though recombination, like a card shuffler, is random in one sense, there is a telos to certain random processes that makes them something other than altogether random. A card shuffler is not random with respect to the “fitness” of the card game; in fact, it is specifically designed to make for a fairer and more balanced game night. Likewise, recombination, particularly homologous recombination (HR), is fundamentally a high-fidelity DNA repair pathway. It is designed to prevent the unchecked spread of broken or deleterious genes within a single genotype. Like the card shuffler, the mechanism of recombination has no foresight, but it has an explicit function nonetheless.
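    The card-shuffler analogy can be made concrete with a short sketch. The allele labels and the single-crossover model below are simplifications of my own, but they show the relevant property: every allele in the recombinant chromosome was already present in one of the parents.

    ```python
    import random

    # Two parental haplotypes carrying hypothetical allele labels.
    maternal = ["a1", "b1", "c1", "d1", "e1"]
    paternal = ["a2", "b2", "c2", "d2", "e2"]

    def recombine(chrom_a, chrom_b):
        """Single crossover at a random point between the two haplotypes."""
        point = random.randint(1, len(chrom_a) - 1)
        return chrom_a[:point] + chrom_b[point:]

    recombinant = recombine(maternal, paternal)
    print(recombinant)                                    # e.g. ['a1', 'b1', 'c2', 'd2', 'e2']
    print(set(recombinant) <= set(maternal + paternal))   # True: no new alleles created
    ```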

    Besides recombination, there are many discrete ways in which mutations can happen. On the small scale, we see things like Single Nucleotide Polymorphisms (SNPs) and Insertions and Deletions (Indels). Zooming out, we also find mutations such as duplications & deletions of genes or multiple genes (e.g., CNVs), exon shuffling, and transposable elements. On the grand scale, we see events such as whole genome duplications and epigenetic modifications as well.

    On the small scale, Single Nucleotide Polymorphisms (SNPs) and Insertions and Deletions (Indels) are the equivalent of typos or missing characters within an existing blueprint. While a typo can certainly change the meaning of a sentence, it cannot generate a completely new architectural plan. It modifies the existing instruction set; it does not introduce a novel concept or function absent in the original text. These are powerful modifiers, but their action is always upon pre-existing information.

    It is also the case that these mutations can never rightly be called evolution. They are not creative; they are only destructive mechanisms. Copy errors create noise, not clarity, in information systems.

    Further, these small-scale mutations happen within the context of the preexisting structure and integrity of the genome, so that even those said to be beneficial are preordained to be so by higher design principles. For instance, much work has been done to show that nucleosomes protect DNA from damage and that structural variants stabilize the regions in which they emerge:

    “Structural variants (SVs) tend to stabilize regions in which they emerge, with the effect most pronounced for pathogenic SVs. In contrast, the effects of chromothripsis are seen across regions less prone to breakages. We find that viral integration may bring genome fragility, particularly for cancer-associated viruses.” (Pflughaupt et al.)

    “Eukaryotic DNA is organized in nucleosomes, which package DNA and regulate its accessibility to transcription, replication, recombination, and repair… living cells nucleosomes protect DNA from high-energy radiation and reactive oxygen species.” (Brambilla et al.)

    Moving to the medium scale, consider duplications and deletions (CNVs) and exon shuffling. Gene duplication, often cited as a source of novelty, is simply the copying of an entire, functional module, a paragraph or even a full chapter. This provides redundancy. It is often supposed that this allows one copy to drift while the original performs its necessary task. But gene duplications are not simply ignored by the genome or by selective processes. They are often discarded almost immediately if they do not confer a use; otherwise, they are incorporated in a regulated way.

    “Gene family members may have common non-random patterns of origin that recur independently in different evolutionary lineages (such as monocots and dicots, studied here), and that such patterns may result from specific biological functions and evolutionary needs.” (Wang et al.)

    Here we see that there is often a causal link between the needs of the organism and the duplication event itself. Further, we observe a highly selective process of monitoring post-duplication:

    “Recently, a nonrandom process of gene loss after these different polyploidy events has been postulated [12,31,38]. Maere et al. [12] have shown that gene decay rates following duplication differ considerably between different functional classes of genes, indicating that the fate of a duplicated gene largely depends on its function.” (Casneuf et al.)

    Even if the function conferred was redundancy, redundancy is not creation; it is merely an insurance policy for existing information. Where, precisely, is the mechanism that takes that redundant copy and molds it into a fundamentally new structure or process—say, turning a light-sensing pigment gene into a clotting factor? What is the search space that will have to be traversed? Indels and SNPs are not sufficient to modify a duplication into something entirely novel. Novel genes require novel sequences for coding specific proteins and novel sequences for regulation. Duplication at best provides a scratch pad, which is highly sensitive to being tampered with.
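    To give a rough sense of the scale of that search space, here is a back-of-the-envelope sketch. The gene length and the number of mutational trials are assumptions chosen only for illustration, not measured values:

    ```python
    import math

    # How large is sequence space for a duplicated gene, and how little of it
    # could ever be sampled by point mutations?
    gene_length = 900                            # hypothetical gene, in bases
    log10_space = gene_length * math.log10(4)    # log10 of 4**900, roughly 542
    log10_trials = 40                            # assume a generous 10**40 trials

    print(f"Possible sequences: about 10^{log10_space:.0f}")
    print(f"Fraction sampled by 10^{log10_trials} trials: about 10^{log10_trials - log10_space:.0f}")
    ```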

    Exon shuffling, similarly, is a process of reorganization, splicing together pre-existing functional protein domains. This is the biological equivalent of an editor cutting and pasting sentences from one section into another. The result can be a new combination, but every word and grammatical rule was already present. It is the sorting and redistribution of parts.

    Further, exon shuffling is a highly regulated process that has been shown to be constrained by splice frame rules and mediated by TEs in introns.

    “Exon shuffling follows certain splice frame rules. Introns can interrupt the reading frame of a gene by inserting a sequence between two consecutive codons (phase 0 introns), between the first and second nucleotide of a codon (phase 1 introns), or between the second and third nucleotide of a codon (phase 2 introns).” (Wikipedia Contributors)

    This Wikipedia entry gives a taste of the precision and intense regulation, prerequisite and premeditated, needed to perform what are essentially surgical operations that create specialized proteins for cellular operation. One reason it is such a delicate process is portrayed in this journal article:

    “Successful shuffling requires that the domain in question is bordered by introns that are of the same phase, that is, that the domain is symmetrical in accordance with the phase-compatibility rules of exon shuffling (Patthy 1999b), because shuffling of asymmetrical exons/domains will result in a shift of the reading frame in the downstream exons of recipient genes.” (Kaessmann)
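    The phase-compatibility rule described in these excerpts can be written as a one-line check. The sketch below is my own simplification of the rule, not code from either source:

    ```python
    # An exon (or domain) can be shuffled into a new gene without shifting the
    # downstream reading frame only if its flanking introns share the same
    # phase (a "symmetric" exon). Intron phases are 0, 1, or 2.
    def is_shufflable(upstream_intron_phase, downstream_intron_phase):
        """Symmetric exons (equal flanking phases) preserve the reading frame."""
        return upstream_intron_phase == downstream_intron_phase

    print(is_shufflable(1, 1))  # True: a 1-1 symmetric exon can be inserted cleanly
    print(is_shufflable(0, 2))  # False: an asymmetric exon shifts the downstream frame
    ```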

    In the same way, transposable elements are constrained by the epigenetic and structural goings-on of the genome. Research shows that transposase recognizes DNA structure at insertion sites, and there are physical constraints caused by chromatin:

    “We show that all four of these measures of DNA structure deviate significantly from random at P element insertion sites. Our results argue that the donor DNA and transposase complex performing P element integration may recognize a structural feature of the target DNA.” (Liao Gc et al.)

    Finally, we look at the grand scale. Whole Genome Duplication (WGD) is the ultimate copy-paste—duplicating the entire instructional library. Again, this provides massive redundancy but offers zero novel genetic information. This is not creative in any meaningful sense, even at the largest scale.

    As for epigenetic modifications, these are critical regulatory mechanisms that determine when and how existing genes are expressed. They are the rheostats and switches of the cell, changing the output and timing without ever altering the source code (the DNA sequence). They are regulatory, not informational creators.

    The central issue remains: The second definition of evolution requires the creation of new organizational blueprints and entirely novel biological functions.

    The mechanism of change relied upon—mutation—is, across all its various types, fundamentally a system of copying, modification, deletion, shuffling, or regulation of existing, functional genetic information. None of these phenomena, regardless of their scale, demonstrates the capacity to generate the required novel information (the “land bridge”) necessary to traverse the vast gap between one kind of organism and another. Again, they are really great mechanisms for change over time, but they are pitiable creative mechanisms.

    Therefore, the argument that the two definitions of evolution are merely differences of scale falls apart. The extrapolation from observing a shift in coat color frequency (Definition 1) to positing a common ancestor for all life (Definition 2) is logically insufficient. It requires a creative mechanism that is qualitatively different from the mechanisms of sorting and modification we observe. Lacking that demonstrated, information-generating mechanism, we are left with two equivocal terms, where one is an undeniable fact of variation and the other is an unsupported inference of mechanism—a proposal to walk across the deep, open ocean with only the capacity to walk on land.

    Works Cited

    Brambilla, Francesca, et al. “Nucleosomes Effectively Shield DNA from Radiation Damage in Living Cells.” Nucleic Acids Research, vol. 48, no. 16, 10 July 2020, pp. 8993–9006, pmc.ncbi.nlm.nih.gov/articles/PMC7498322/, https://doi.org/10.1093/nar/gkaa613. Accessed 30 Oct. 2025.

    Casneuf, Tineke, et al. “Nonrandom Divergence of Gene Expression Following Gene and Genome Duplications in the Flowering Plant Arabidopsis Thaliana.” Genome Biology, vol. 7, no. 2, 2006, p. R13, https://doi.org/10.1186/gb-2006-7-2-r13. Accessed 7 Sept. 2021.

    Kaessmann, H. “Signatures of Domain Shuffling in the Human Genome.” Genome Research, vol. 12, no. 11, 1 Nov. 2002, pp. 1642–1650, https://doi.org/10.1101/gr.520702. Accessed 16 Jan. 2020.

    Liao, G. C., et al. “Insertion Site Preferences of the P Transposable Element in Drosophila Melanogaster.” Proceedings of the National Academy of Sciences of the United States of America, vol. 97, no. 7, 14 Mar. 2000, pp. 3347–3351, https://doi.org/10.1073/pnas.97.7.3347. Accessed 1 Dec. 2023.

    Pflughaupt, Patrick, et al. “Towards the Genomic Sequence Code of DNA Fragility for Machine Learning.” Nucleic Acids Research, vol. 52, no. 21, 23 Oct. 2024, pp. 12798–12816, https://doi.org/10.1093/nar/gkae914. Accessed 8 Nov. 2025.

    Wang, Yupeng, et al. “Modes of Gene Duplication Contribute Differently to Genetic Novelty and Redundancy, but Show Parallels across Divergent Angiosperms.” PLoS ONE, vol. 6, no. 12, 2 Dec. 2011, p. e28150, https://doi.org/10.1371/journal.pone.0028150. Accessed 20 Dec. 2021.

    Wikipedia Contributors. “Exon Shuffling.” Wikipedia, Wikimedia Foundation, 31 Oct. 2025, en.wikipedia.org/wiki/Exon_shuffling.

  • An Argument for Agent Causation in the Origin of DNA’s Information

    An Argument for Agent Causation in the Origin of DNA’s Information

    NOTE: This is a design argument inspired by Stephen Meyer’s design argument from DNA. Importantly, specified complexity is replaced with semiotic code (which I feel is more precise), and intelligent design is replaced with agent causation (which I find preferable).

    This argument posits that the very nature of the information encoded in DNA, specifically its structure as a semiotic code, necessitates an intelligent cause in its origin. The argument proceeds by establishing two key premises: first, that semiotic codes inherently require intelligent (agent) causation for their creation, and second, that DNA functions as a semiotic code.

    Premise 1: The Creation of a Semiotic Code Requires Agent Causation (Intelligence)

    A semiotic code is a system designed for conveying meaning through the use of signs. At its core, a semiotic code establishes a relationship between a signifier (the form the sign takes, e.g., a word, a symbol, a sequence) and a signified (the concept or meaning represented). Crucially, in a semiotic code, this relationship is arbitrary or conventional, not based on inherent physical or chemical causation between the signifier and the signified. This requires an interpretive framework – a set of rules or a system – that is independent of the physical properties of the signifier itself, providing the means to encode and decode the meaning. The meaning resides not in the physical signal, but in its interpretation according to the established code.

    Consider examples like human language, musical notation, or traffic signals. The sound “stop” or the sequence of letters S-T-O-P has no inherent physical property that forces a vehicle to cease motion. A red light does not chemically or physically cause a car to stop; it is a conventionally assigned symbol that, within a shared interpretive framework (traffic laws and driver understanding), signifies a command to stop. This is distinct from a natural sign, such as smoke indicating fire. In this case, the relationship between smoke and fire is one of direct, necessary physical causation (combustion produces smoke). While an observer can interpret smoke as a sign of fire, the connection itself is a product of natural laws, existing independently of any imposed code or interpretive framework.

    The capacity to create and utilize a system where arbitrary symbols reliably and purposefully convey specific meanings requires more than just physical processes. It requires the ability to:

    • Conceive of a goal: To transfer specific information or instruct an action.
    • Establish arbitrary conventions: To assign meaning to a form (signifier) where no inherent physical link exists to the meaning (signified).
    • Design an interpretive framework: To build or establish a system of rules or machinery that can reliably encode and decode these arbitrary relationships.
    • Implement this system for goal-directed action: To use the code and framework to achieve the initial goal of information transfer and subsequent action based on that information.

    This capacity to establish arbitrary, rule-governed relationships for the purpose of communication and control is what we define as intelligence in this context. The creation of a semiotic code is an act of imposing abstract order and meaning onto physical elements according to a plan or intention. Such an act requires agent causation – causation originating from an entity capable of intentionality, symbolic representation, and the design of systems that operate based on abstract rules, rather than solely from the necessary interactions of physical forces (event causation).

    Purely natural, undirected physical processes can produce complex patterns and structures driven by energy gradients, chemical affinities, or physical laws (like crystal formation, which is a direct physical consequence of electrochemical forces and molecular structure, lacking arbitrary convention, an independent interpretive framework, or symbolic representation). However, they lack the capacity to establish arbitrary conventions where the link between form and meaning is not physically determined, nor can they spontaneously generate an interpretive framework that operates based on such non-physical rules for goal-directed purposes. Therefore, the existence of a semiotic code, characterized by arbitrary signifier-signified links and an independent interpretive framework for goal-directed information transfer, provides compelling evidence for the involvement of intelligence in its origin.

    Premise 2: DNA Functions as a Semiotic Code

    The genetic code within DNA exhibits the key characteristics of a semiotic code as defined above. Sequences of nucleotides (specifically, codons on mRNA) act as signifiers. The signifieds are specific amino acids, which are the building blocks of proteins.

    Crucially, the relationship between a codon sequence and the amino acid it specifies is not one of direct chemical causation. A codon (e.g., AUG) does not chemically synthesize or form the amino acid methionine through a direct physical reaction dictated by the codon’s molecular structure alone. Amino acid synthesis occurs through entirely separate biochemical pathways involving dedicated enzymes.

    Instead, the codon serves as a symbolic signal that is interpreted by the complex cellular machinery of protein synthesis – the ribosomes, transfer RNAs (tRNAs), and aminoacyl-tRNA synthetases. This machinery constitutes the interpretive framework.

    Here’s how it functions as a semiotic framework:

    • Arbitrary/Conventional Relationship: The specific assignment of a codon triplet to a particular amino acid is largely a matter of convention. While there might be some historical or biochemical reasons that biased the code’s evolution, the evidence from synthetic biology, where scientists have successfully engineered bacteria with different codon-amino acid assignments, demonstrates that the relationship is not one of necessary physical linkage but of an established (and in this case, artificially modified) rule or convention. Different codon assignments could work, but the system functions because the cellular machinery reliably follows the established rules of the genetic code.
    • Independent Interpretive Framework: The translation machinery (ribosome, tRNAs, synthetases) is a complex system that reads the mRNA sequence (signifier) and brings the correct amino acid (signified) to the growing protein chain, according to the rules encoded in the structure and function of the tRNAs and synthetases. The meaning (“add this amino acid now”) is not inherent in the chemical properties of the codon itself but resides in how the interpretive machinery is designed to react to that codon. This machinery operates independently of direct physical causation by the codon itself to create the amino acid; it interprets the codon as an instruction within the system’s logic.
    • Symbolic Representation: The codon stands for an amino acid; it is a symbol representing a unit of meaning within the context of protein assembly. The physical form (nucleotide sequence) is distinct from the meaning it conveys (which amino acid to add). This is analogous to the word “cat” representing a feline creature – the sound or letters don’t physically embody the cat but symbolize the concept.

    Therefore, DNA, specifically the genetic code and the translation system that interprets it, functions as a sophisticated semiotic code. It involves arbitrary relationships between signifiers (codons) and signifieds (amino acids), mediated by an independent interpretive framework (translation machinery) for the purpose of constructing functional proteins (goal-directed information transfer).
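    The point that the codon-to-amino-acid relationship is a rule rather than a chemical consequence can be illustrated with a small sketch. The partial table below uses standard assignments, but the code itself is only an illustration of the lookup idea, not a model of the ribosome:

    ```python
    # Translation as a table lookup: nothing about the string "AUG" physically
    # entails methionine; the assignment lives in the interpreting machinery.
    CODON_TABLE = {
        "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
        # ... the remaining 60 standard assignments follow the same pattern
    }

    def translate(mrna):
        """Read an mRNA string three letters at a time and look up each codon."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i+3], "???")
            if residue == "STOP":
                break
            peptide.append(residue)
        return "-".join(peptide)

    print(translate("AUGUUUGGCUAA"))  # Met-Phe-Gly
    ```

    Swap the entries in the table and the same machinery would read the same message differently, which is exactly what the synthetic biology experiments mentioned above exploit when recoding organisms.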

    Conclusion: Therefore, DNA Requires Agent Causation in its Origin

    Based on the premises established:

    1. The creation of a semiotic code, characterized by arbitrary conventions, an independent interpretive framework, and symbolic representation for goal-directed information transfer, requires the specific capacities associated with intelligence and agent causation (intentionality, abstraction, rule-creation, system design).
    2. DNA, through the genetic code and its translation machinery, functions as a semiotic code exhibiting these very characteristics.

    It logically follows that the origin of DNA’s semiotic structure requires agent causation. The arbitrary nature of the code assignments and the existence of a complex system specifically designed to read and act upon these arbitrary rules, independent of direct physical necessity between codon and amino acid, are hallmarks of intelligent design, not the expected outcomes of undirected physical or chemical processes.

    Addressing Potential Objections:

    • Evolution and Randomness: While natural selection can act on variations in existing biological systems, it requires a self-replicating system with heredity – which presupposes the existence of a functional coding and translation system. Natural selection is a filter and modifier of existing information; it is not a mechanism for generating a semiotic code from scratch. Randomness, by definition, lacks the capacity to produce the specified, functional, arbitrary conventions and the integrated interpretive machinery characteristic of a semiotic code. The challenge is not just sequence generation, but the origin of the meaningful, rule-governed relationship between sequences and outcomes, and the system that enforces these rules.
    • “Frozen Accident” and Abiogenesis Challenges: Hypotheses about abiogenesis and early life (like the RNA world) face significant hurdles in explaining the origin of this integrated semiotic system. The translation machinery is a highly complex and interdependent system (a “chicken-and-egg” problem where codons require tRNAs and synthetases to be read, but tRNAs and synthetases are themselves encoded by and produced through this same system). The origin of the arbitrary codon-amino acid assignments and the simultaneous emergence of the complex machinery to interpret them presents a significant challenge for gradual, undirected assembly driven solely by chemical or physical affinities.
    • Biochemical Processes vs. Interpretation: The argument does not claim that a ribosome is a conscious entity “interpreting” in the human sense. Instead, it argues that the system it is part of (the genetic code and translation machinery) functions as an interpretive framework because it reads symbols (codons) and acts according to established, arbitrary rules (the genetic code’s assignments) to produce a specific output (amino acid sequence), where this relationship is not based on direct physical necessity but on a mapping established by the code’s design. This rule-governed, symbolic mapping, independent of physical causation between symbol and meaning, is the defining feature of a semiotic code requiring an intelligence to establish the rules and the system.
    • God-of-the-Gaps: This argument is not based on mere ignorance of a natural explanation. It is a positive argument based on the nature of the phenomenon itself. Semiotic codes, wherever their origin is understood (human language, computer code), are the products of intelligent activity involving the creation and implementation of arbitrary conventions and interpretive systems for goal-directed communication. The argument posits that DNA exhibits these defining characteristics and therefore infers a similar type of cause in its origin, based on a uniformity of experience regarding the necessary preconditions for semiotic systems.

    In conclusion, the sophisticated, arbitrary, and rule-governed nature of the genetic code and its associated translation machinery point to it being a semiotic system. Based on the inherent requirements for creating such a system—namely, the capacities for intentionality, symbolic representation, rule-creation, and system design—the origin of DNA’s information is best explained by the action of an intelligent agent.

  • Chromosome 2 Fusion: Evidence Out Of Thin Air?

    Chromosome 2 Fusion: Evidence Out Of Thin Air?

    The story is captivating and frequently told in biology textbooks and popular science: humans possess 46 chromosomes while our alleged closest relatives, chimpanzees and other great apes, have 48. The difference, evolutionists claim, is due to a dramatic event in our shared ancestry – the fusion of two smaller ape chromosomes to form the large human Chromosome 2. This “fusion hypothesis” is often presented as slam-dunk evidence for human evolution from ape-like ancestors. But when we move beyond the narrative and scrutinize the actual genetic data, does the evidence hold up? A closer look suggests the case for fusion is far from conclusive, perhaps even bordering on evidence conjured “out of thin air.”

    The fusion model makes specific predictions about what we should find at the junction point on Chromosome 2. If two chromosomes, capped by protective telomere sequences, fused end-to-end, we’d expect to see a characteristic signature: the telomere sequence from one chromosome (repeats of TTAGGG) joined head-to-head with the inverted telomere sequence from the other (repeats of CCCTAA). These telomeric repeats typically number in the thousands at chromosome ends.  

    The Missing Telomere Signature

    When scientists first looked at the proposed fusion region (locus 2q13), they did find some sequences resembling telomere repeats (IJdo et al., 1991). This was hailed as confirmation. However, the reality is much less convincing than proponents suggest.

    Instead of thousands of ordered repeats forming a clear TTAGGG…CCCTAA structure, the site contains only about 150 highly degraded, degenerate telomere-like sequences scattered within an ~800 base pair region. Searching a much larger 64,000 base pair region yields only 136 instances of the core TTAGGG hexamer, far short of a telomere’s structure. Crucially, the orientation is often wrong – TTAGGG motifs appear where CCCTAA should be, and vice-versa. This messy, sparse arrangement hardly resembles the robust structure expected from even an ancient, degraded fusion event.
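    To make the contrast concrete, here is a minimal sketch of the head-to-head junction signature the fusion model predicts. The sequences are synthetic examples of my own; this is not an analysis of the actual 2q13 sequence:

    ```python
    import re

    # Predicted fusion signature: a long run of forward telomere repeats (TTAGGG)
    # meeting a long run of inverted repeats (CCCTAA) at the junction.
    def has_fusion_signature(seq, min_repeats=10):
        """Return True if seq contains head-to-head runs of telomere repeats."""
        pattern = re.compile(rf"(?:TTAGGG){{{min_repeats},}}(?:CCCTAA){{{min_repeats},}}")
        return bool(pattern.search(seq))

    pristine_junction = "TTAGGG" * 50 + "CCCTAA" * 50     # what the model predicts
    degenerate_region = "TTAGGGCCCTAATTAGGCATTGGG" * 30   # a scrambled toy stand-in
    print(has_fusion_signature(pristine_junction))        # True
    print(has_fusion_signature(degenerate_region))        # False
    ```

    A scan along these lines is what would be expected to light up at 2q13 if thousands of intact repeats had fused; a sparse, mis-oriented arrangement like the one described above does not fit that picture.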

    Furthermore, creationist biologist Dr. Jeffrey Tomkins discovered that this alleged fusion site is not merely inactive debris; it falls squarely within a functional region of the DDX11L2 gene, likely acting as a promoter or regulatory element (Tomkins, 2013). Why would a supposedly non-functional scar from an ancient fusion land precisely within, and potentially regulate, an active gene? This finding severely undermines the idea of it being simple evolutionary leftovers.

    The Phantom Centromere

    A standard chromosome has one centromere. Fusing two standard chromosomes would initially create a dicentric chromosome with two centromeres – a generally unstable configuration. The fusion hypothesis thus predicts that one of the original centromeres must have been inactivated, leaving behind a remnant or “cryptic” centromere on Chromosome 2.  

    Proponents point to alpha-satellite DNA sequences found around locus 2q21 as evidence for this inactivated centromere, citing studies like Avarello et al. (1992) and the chromosome sequencing paper by Hillier et al. (2005). But this evidence is weak. Alpha-satellite DNA is indeed common near centromeres, but it’s also found abundantly elsewhere throughout the genome, performing various functions.  

    The Avarello study, conducted before full genome sequencing, used methods that detected alpha-satellite DNA generally, not functional centromeres specifically. Their results were inconsistent, with the signal appearing in less than half the cells examined – hardly the signature of a definite structure. Hillier et al. simply noted the presence of alpha-satellite tracts, but these specific sequences are common types found on nearly every human chromosome and show no unique similarity or phylogenetic clustering with functional centromere sequences. There’s no compelling structural or epigenetic evidence marking this region as a bona fide inactivated centromere; it’s simply a region containing common repetitive DNA.

    Uniqueness and the Mutation Rate Fallacy

    Adding to the puzzle, the specific short sequence often pinpointed as the precise fusion point isn’t unique. As can be demonstrated using the BLAT tool, this exact sequence appears on human Chromosomes 7, 19, and the X and Y chromosomes. If this sequence is the unique hallmark of the fusion event, why does it appear elsewhere? The evolutionary suggestion that these might be remnants of other, even more ancient fusions is pure speculation without a shred of supporting evidence.

    The standard evolutionary counter-argument to the lack of clear telomere and centromere signatures is degradation over time. “The fusion happened millions of years ago,” the reasoning goes, “so mutations have scrambled the evidence.” However, this explanation crumbles under the weight of actual mutation rates.

    Using accepted human mutation rate estimates (Nachman & Crowell, 2000) and the supposed 6-million-year timeframe since divergence from chimps, we can calculate that the specific ~800 base pair fusion region would be statistically unlikely to have suffered even one mutation during that entire period! The observed mutation rate is simply far too low to account for the dramatic degradation required to turn thousands of pristine telomere repeats and a functional centromere into the sequences we see today. Ironically, the known mutation rate argues against the degradation explanation needed to salvage the fusion hypothesis.

    Common Design vs. Common Ancestry

    What about the general similarity in gene order (synteny) between human Chromosome 2 and chimpanzee chromosomes 2A and 2B? While often presented as strong evidence for fusion, similarity does not automatically equate to ancestry. An intelligent designer reusing effective plans is an equally valid, if not better, explanation for such similarities. Moreover, the “near identical” claim is highly exaggerated; large and significant differences exist in gene content, control regions, and overall size, especially when non-coding DNA is considered (Tomkins, 2011, suggests overall similarity might be closer to 70%). This makes sense when one considers that coding regions provide the recipes for proteins, and organisms with similar needs will share similar recipes.

    Conclusion: A Story Of Looking for Evidence

    When the genetic data for human Chromosome 2 is examined without a pre-commitment to the evolutionary narrative, the evidence for the fusion event appears remarkably weak. So much so that it raises the question: was this a mad dash to explain the blatant differences between the human and chimp genomes? The expected telomere signature is absent, replaced by a short, jumbled sequence residing within a functional gene region. The evidence for a second, inactivated centromere relies on the presence of common repetitive DNA lacking specific centromeric features. The supposed fusion sequence isn’t unique, and known mutation rates are woefully insufficient to explain the degradation required by the evolutionary model over millions of years.

    The chromosome 2 fusion story seems less like a conclusion drawn from compelling evidence and more like an interpretation imposed upon ambiguous data to fit a pre-existing belief in human-ape common ancestry. The scientific data simply does not support the narrative. Perhaps it’s time to acknowledge that the “evidence” for this iconic fusion event may indeed be derived largely “out of thin air.”

    References: