Creation Questions

  • Introduction To Created Heterozygosity

    Introduction

    Evolution by natural selection is a foundational theory in biology, observable in bacteria developing resistance, finch beak size changes, and populations adapting to environments. These microevolution examples are experimentally verified and widely accepted.

    A deeper question persists: Are the mechanisms of random mutation and natural selection sufficient to explain not only the modification of existing biological structures, but also their original creation? Specifically, can the processes observed in generating variation within species account for the origin of entirely novel protein folds, enzymatic functions, and the fundamental molecular machinery of life?

    This essay addresses this question by systematically evaluating the proposed mechanisms for evolutionary innovation, identifying their constraints, and highlighting what appears to be a fundamental limit: the origin of complex protein architecture.

    Part I: The Mechanisms of Modification

    Gene Duplication: Copy, Paste, Edit

    The most commonly cited mechanism for evolutionary innovation is gene duplication. The logic is straightforward: when a gene is accidentally copied during DNA replication, the organism now has two versions. One copy maintains the original function (keeping the organism alive), while the redundant copy is “free” to mutate without immediate lethal consequences.

    In theory, this freed copy can acquire new functions through random mutation—a process called neofunctionalization. Over time, what was once a single-function gene becomes a gene family with diverse, related functions.

    This mechanism is real and well-documented. For instance, in “trio” studies (father, mother, child), we regularly see de novo copy number variations (CNVs). Based on this, we can trace gene families back through evolutionary history and see convincing evidence of duplication events. However, gene duplication has important limitations:

    Dosage sensitivity: Cells operate as finely tuned chemical systems. Doubling the amount of a protein often disrupts this balance, creating harmful or even lethal effects. The cell isn’t simply tolerant of extra copies—duplication frequently imposes an immediate cost.

    Subfunctionalization: Rather than one copy evolving a bold new function, duplicate genes more commonly undergo subfunctionalization—they degrade slightly and split the original function between them. What was once done by one gene is now accomplished by two, each doing part of the job. This adds genomic complexity but doesn’t create novel capabilities.

    The prerequisite problem: Most fundamentally, gene duplication requires a functional gene to already exist. It’s a “copy-paste-edit” mechanism. It can explain variations on a theme—how you get a family of related enzymes—but it cannot explain the origin of the first member of that family.

    Evo-Devo: Rewiring the Switches

    Evolutionary developmental biology (evo-devo) revealed something crucial: many major morphological changes don’t come from inventing new genes, but from rewiring when and where existing genes are expressed. Mutations in regulatory elements—the “switches” that control genes—can produce dramatic changes in body plans.

    A classic example: the difference between a snake and a lizard isn’t that snakes invented fundamentally new genes. Rather, mutations in regulatory regions altered the expression patterns of Hox genes (master developmental regulators), eliminating limb development while extending the body axis.

    This mechanism helps explain how evolution can produce dramatic morphological diversity without constantly inventing new molecular parts. But it has clear boundaries:

    The circuitry prerequisite: Regulatory evolution presupposes the existence of a sophisticated, modular regulatory network—the Hox genes themselves, enhancer elements, transcription factor binding sites. This network is enormously complex. Evo-devo explains how to rearrange the blueprint, but not where the drafting tools came from.

    Modification, not creation: You can turn genes on in new places, at new times, in new combinations. You can lose structures (snakes losing legs). But you cannot regulatory-mutate your way to a structure whose genetic basis doesn’t already exist. You’re rearranging existing parts, not forging new ones.

    Exaptation: Shifting Purposes

    Exaptation describes how traits evolved for one function can be co-opted for another. Feathers, possibly first used for insulation or display, were later recruited for flight. Swim bladders in fish became lungs in land vertebrates.

    This is an important concept for understanding evolutionary pathways—it explains how structures can be preserved and refined even when their ultimate function hasn’t yet emerged. But exaptation is a description of changing selective pressures, not a mechanism of generation. It tells us how a trait might survive intermediate stages, but not how the physical structure arose in the first place.

    Part II: The Hard Problem—De Novo Origins

    The mechanisms above all share a common feature: they are remixing engines. They shuffle, duplicate, rewire, and repurpose existing genetic material. This works brilliantly for generating diversity and adaptation. But it raises an unavoidable question: Where did the original material come from?

    This is where the inquiry becomes more challenging.

    De Novo Gene Birth: From Junk to Function?

    To tackle this question, we examine the hypothesis that new genes can arise from previously non-coding “junk” DNA—an idea central to de novo gene birth.

    The idea is that such non-coding DNA occasionally gets transcribed at random. If a mutation creates an open reading frame (a start codon, a run of codons, a stop codon), the cell may produce a random peptide. Perhaps, very rarely, this random peptide does something useful, and natural selection preserves and refines it.

    This mechanism has some support. We do see “orphan genes” in various lineages—genes with no clear homologs in related species, suggesting recent origin. When we examine these orphan genes, many are indeed simple: short, intrinsically disordered proteins with low expression levels.

    But here’s where we hit the toxicity filter—a fundamental physical constraint.

    The Toxicity Filter

    Protein synthesis is energetically expensive, consuming up to 75% of a growing cell’s energy budget. When a cell produces a protein, it’s making an investment. If that protein immediately misfolds and gets degraded by the proteasome, the cell has just run a futile cycle—burning energy to produce garbage.

    In a competitive environment (which is where natural selection operates), a cell wasting energy on useless proteins will be outcompeted by leaner, more efficient cells. This creates strong selection pressure against expressing random, non-functional sequences.

    It gets worse. Cells have a limited capacity for handling misfolded proteins. Chaperone proteins help fold new proteins correctly, and the proteasome system degrades those that fail. But these are finite resources. If a cell produces too many difficult-to-fold or misfolded proteins, it triggers the Unfolded Protein Response (UPR).

    The UPR is an emergency protocol. Initially, the cell tries to fix the problem—producing more chaperones, slowing translation. But if the stress is too severe, the UPR switches from “repair” to “abort”: the cell undergoes apoptosis (programmed cell death) to protect the organism.

    This creates a severe constraint: natural selection doesn’t just fail to reward complex random sequences—it actively punishes them. The toxicity filter eliminates complex precursors before they have a chance to be refined.

    The Result

    The “reservoir” of potentially viable de novo genes is therefore biased heavily toward simple, disordered, low-expression peptides. These can slip through because they don’t trigger the toxicity filters. They don’t misfold (because they don’t fold), and at low expression, they don’t drain significant resources.

    This explains the orphan genes we observe: simple, disordered, regulatory, or binding proteins. But it fails to explain the origin of complex, enzymatic machinery—proteins that require specific three-dimensional structures to catalyze reactions.

    Part III: The Valley of Death

    To understand why complex enzymatic proteins are so difficult to generate de novo, we need to examine what makes them different from simple disordered proteins.

    Two Types of Proteins

    Intrinsically Disordered Proteins (IDPs) are floppy, flexible chains. They’re rich in polar and charged amino acids (hydrophilic—“water-loving”). These amino acids are happy interacting with water, so the protein doesn’t collapse into a compact structure. IDPs are excellent for binding to other molecules (they can wrap around things) and for regulatory functions (they’re flexible switches). They’re also relatively safe—they don’t aggregate easily.

    Folded Proteins, by contrast, have a hydrophobic core. Water-hating amino acids cluster in the center of the protein, away from the surrounding water. This hydrophobic collapse creates a stable, specific three-dimensional structure. Folded proteins can do things IDPs cannot: precise catalysis requires holding a substrate molecule in exactly the right geometry, which requires a rigid, well-defined active site pocket.

    The problem is that the “recipe” for these two types of proteins is fundamentally different. You can’t gradually transition from one to the other without passing through a dangerous intermediate state.

    The Sticky Globule Problem

    Imagine trying to evolve from a safe IDP to a functional folded enzyme:

    1. Start: A disordered protein—polar amino acids, floppy, safe.
    2. Intermediate: As you mutate polar residues to hydrophobic ones, you don’t immediately get a nice folded structure. Instead, you get a partially hydrophobic chain—the worst of both worlds. These “sticky globules” are aggregation-prone. They clump together like glue, forming toxic aggregates.
    3. End: A properly folded protein with a hydrophobic core and a stable structure.

    The middle step—the sticky globule phase—is precisely what the toxicity filter eliminates most aggressively. These partially hydrophobic intermediates are the most dangerous type of protein for a cell.

    This creates what we might call the Valley of Death: a region of sequence space that is selected against so strongly that random mutation cannot cross it. To get from a safe disordered protein to a functional enzyme, you’d need to traverse this valley—but natural selection is actively pushing you back.

    Catalysis Requires Geometry

    There’s a second constraint. Catalysis—the acceleration of chemical reactions—almost always requires a precise three-dimensional pocket (an active site) that can:

    • Position the substrate molecule correctly.
    • Stabilize the transition state.
    • Shield the reaction from water (in many cases).

    A floppy disordered protein is excellent for binding (it can wrap around things), but terrible for catalysis. It lacks the rigid geometry needed to precisely orient molecules and stabilize reaction intermediates.

    This means the “functional gradient” isn’t smooth. You can evolve binding functions with IDPs. You can evolve regulatory functions. But to evolve enzymatic function, you need to cross the valley—and the valley actively resists crossing.

    Part IV: The Escape Route—And Its Implications

    There is one clear escape route from the Valley of Death: don’t cross it at all.

    Divergence from Existing Folds

    If you already have a stable folded protein—one with a hydrophobic core and a defined structure—you can modify it safely:

    1. Duplicate it: Now you have a redundant copy.
    2. Keep the core: The hydrophobic core (the “dangerous” part) stays conserved. This maintains structural stability.
    3. Mutate the surface: The active site is usually on flexible loops outside the core. Mutate these loops to change substrate specificity, reaction type, or regulation.

    This mechanism is well-documented. It’s how modern enzyme families diversify. You get proteins that are functionally very different (digesting different substrates, catalyzing different reactions) but structurally similar—variations on the same fold.

    Critically, you never cross the Valley of Death because you never dismantle the scaffold. You’re modifying an existing, stable structure, not building one from scratch.

    The Primordial Set

    This escape route, however, comes with a profound implication: it presupposes the fold already exists.

    If modern enzymatic diversity arises primarily through divergence from existing folds rather than de novo generation of new folds, where did those original folds come from?

    The empirical data suggest a striking answer: they arose very early, and there hasn’t been much architectural innovation since.

    When we examine protein structures across all domains of life, we don’t see a continuous spectrum of novel shapes appearing over evolutionary time. Instead, we see roughly 1,000-10,000 basic structural scaffolds (fold families) that appear again and again. A bacterial enzyme and a human enzyme performing completely different functions often share the same underlying fold—the same basic architectural plan.

    Comparative genomics pushes this pattern even further back. The vast majority of these fold families appear to have been present in LUCA—the Last Universal Common Ancestor—over 3.5 billion years ago.

    The implication is stark: evolution seems to have experienced a “burst” of architectural invention right at the beginning, and has spent the subsequent 3+ billion years primarily as a remixer and optimizer, not an architect of fundamentally new structures.

    Part V: The Honest Reckoning

    We can now reassess the original question: Are the mechanisms of mutation and natural selection sufficient to explain not just the modification of life, but its origination?

    What the Mechanisms Can Do

    The neo-Darwinian synthesis is extraordinarily powerful for explaining:

    • Optimization: Taking an existing trait and refining it
    • Diversification: Creating variations on existing themes
    • Adaptation: Adjusting populations to new environments
    • Loss: Eliminating unnecessary structures
    • Regulatory rewiring: Changing when and where genes are expressed

    These mechanisms are observed, experimentally verified, and sufficient to explain the vast majority of biological diversity we see around us.

    What the Mechanisms Struggle With

    The same mechanisms face severe constraints when attempting to explain:

    • The origin of novel protein folds: The Valley of Death makes de novo generation of complex, folded, enzymatic proteins implausible under cellular conditions.
    • The origin of the primordial set: The fundamental protein architectures that all modern life relies on.
    • The origin of the cellular machinery: The DNA replication, transcription, translation, and error-correction systems that evolution requires to function.

    A New Theory

    The constraints we’ve examined—the toxicity filter, the Valley of Death, the thermodynamics of protein folding—are not “research gaps” that might be closed with more data. They are physical constraints rooted in chemistry and bioenergetics.

    Modern evolutionary mechanisms are demonstrably excellent at working with existing complexity. They can shuffle it, optimize it, repurpose it, and elaborate on it in extraordinary ways. The diversity of life testifies to its power.

    But when we trace the mechanisms back to their foundation—when we ask how the original protein folds arose, how the first enzymatic machinery came to be—we encounter a genuine boundary.

    The thermodynamics that make de novo fold generation implausible today presumably existed 3.5 billion years ago as well. Perhaps early Earth conditions were radically different in ways that bypassed these constraints—different chemistry, mineral catalysts, an RNA world with different rules. Perhaps there are mechanisms we haven’t yet discovered or understood.

    But based on what we currently understand about the mechanisms of evolution and the physics of protein folding, the honest answer to “how did those original folds arise?” is:

    They didn’t.

    We need a new explanation that can account for the data. We have excellent, mechanistic explanations for how life diversifies and adapts. We have a clear understanding of the constraints that limit those mechanisms. And we have an unsolved problem at the foundation.

    The question remains open: not as a gap in data, but as a gap in mechanism. So what mechanism can account for genetic diversity?

    Part VI: A More Parsimonious Model

    For over a century, the primary explanation for the vast diversity of life on Earth has been the slow accumulation of mutations over millions of years, filtered by natural selection. However, there is another account of the origin of life that is often left unacknowledged and dismissed as pseudoscience. The concept is simple. We see information in the form of DNA, which is, by nature, a linguistic code, and in our repeated and uniform experience codes require minds. If our experience truly tells us that evolutionary mechanisms cannot account for information systems, as we have discovered through this inquiry, then it stands to reason that a design solution cannot rightly be said to be “off the table.”

    However, there are many forms of design, so which one fits the data?

    The answer lies in a powerful, testable model known as Created Heterozygosity and Natural Processes (CHNP). This model suggests that a designer created organisms not as genetically uniform clones, but with pre-existing genetic diversity “front-loaded” into their genomes.

    Here is why Created Heterozygosity makes scientific sense.

    A common objection to any form of young-age design model is that two people cannot produce the genetic variation seen in seven billion humans today. Critics argue that we would be clones. However, this objection assumes Adam and Eve were genetically homozygous (having two identical DNA copies).

    If Adam and Eve were created with heterozygosity—meaning their two sets of chromosomes contained different versions of genes (alleles)—they could possess a massive amount of potential variation.

    The Power of Recombination

    We observe in biology that parents pass on traits through recombination and gene conversion. These processes shuffle the DNA “deck” every generation. Even if Adam and Eve had only two sets of chromosomes each, the number of possible combinations they could produce is mind-boggling.

    If you define an allele by specific DNA positions rather than whole genes, two individuals can carry four unique sets of genomic information. Calculations show that this is sufficient to explain the vast majority of common genetic variants found in humans today without needing millions of years of mutation. In fact, most allelic diversity can be explained by only two “major” alleles.
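
    To make the combinatorics concrete, here is a minimal sketch (my own illustration, not a calculation from the sources above) of how many distinct offspring genotypes a single founding pair could produce, assuming independently assorting loci, the stated allele counts, and ignoring crossover within genes, new mutation, and selection. The allele labels are hypothetical.

    ```python
    import math

    def offspring_genotypes_per_locus(father_alleles, mother_alleles):
        """Distinct unordered genotypes a child can inherit at one locus."""
        return len({tuple(sorted((f, m))) for f in father_alleles for m in mother_alleles})

    # Case 1: all four founder alleles distinct at a locus (father A1/A2, mother A3/A4).
    four_distinct = offspring_genotypes_per_locus(("A1", "A2"), ("A3", "A4"))  # -> 4

    # Case 2: both founders carry the same two "major" alleles (A/B x A/B, as argued for ABO).
    dual_major = offspring_genotypes_per_locus(("A", "B"), ("A", "B"))         # -> 3 (AA, AB, BB)

    # Across independently assorting heterozygous loci, per-locus counts multiply.
    for loci in (10, 100, 1_000):
        print(f"{loci} dual-major loci: about 10^{loci * math.log10(dual_major):.0f} "
              f"possible multi-locus genotypes")
    ```

    Even in the conservative dual-major case, a hundred independently assorting heterozygous loci already allow roughly 10^48 genotype combinations, far more than the number of humans who have ever lived.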

    In short, the problem isn’t that two people can’t produce diversity; it’s that critics assume the starting pair had no diversity to begin with.

    Part VII: A Dilemma, a Ratchet, and Other Problems

    Before we go further in depth in our explanation of CHNP, we must recognize the scope of the problems with evolution. It is not just that the mechanisms are insufficient for creating novelty; that alone would be one thing. Rather, we find insurmountable “gaps” everywhere we turn in the modern synthesis.

    The “Waiting Time” Problem

    The evolutionary model relies on random mutations to generate new genetic information. However, recent numerical simulations reveal a profound waiting time problem. Beneficial mutations are incredibly rare, and waiting for specific strings of nucleotides (genetic letters) to arise and be fixed in a population takes far too long.

    For example, establishing a specific string of just two new nucleotides in a hominin population would take an average of 84 million years. A string of five nucleotides would take 2 billion years. There simply isn’t enough time in the evolutionary timeline (e.g., 6 million years from a chimp-like ancestor to humans) to generate the necessary genetic information from scratch.
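
    The figures above come from published population-genetic simulations; as a rough, independent sanity check on the order of magnitude, the sketch below estimates the wait for even a single pre-specified nucleotide substitution under neutral drift. The mutation rate, generation time, and the neutral-drift framing are my own illustrative assumptions, not parameters from the cited work.

    ```python
    # Under neutrality, the long-run substitution rate at a site equals the mutation rate
    # toward the required base (~u/3), independent of population size, so the expected wait
    # for one pre-specified base change to arise AND drift to fixation is roughly 3/u
    # generations (ignoring the additional ~4N generations the fixation sweep itself takes).

    u = 1.5e-8          # assumed per-site mutation rate per generation
    gen_years = 20      # assumed hominin generation time in years

    wait_generations = 3 / u                    # ~2e8 generations for ONE specified base
    wait_years = wait_generations * gen_years   # ~4e9 years

    print(f"Expected wait for one specified substitution: ~{wait_generations:.0e} generations")
    print(f"                                              ~{wait_years:.0e} years")
    ```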

    Haldane’s Dilemma

    In 1957, the evolutionary geneticist J.B.S. Haldane calculated that natural selection is not free; it has a biological “cost”. For any specific genetic variant (mutation) to increase in a population, the individuals without that trait must effectively be removed from the gene pool—either by death or by failing to reproduce.

    This creates a dilemma for the evolutionary narrative:

    A population only has a limited surplus of offspring available to be “spent” on selection. If a species needs to select for too many traits at once, or eliminate too many mutations, the required death rate would exceed the reproductive rate, driving the species to extinction.

    Haldane calculated that for a species with a low reproductive rate like humans, fixing just one beneficial mutation would cost roughly 300 generations. This pace is far too slow to explain the complexity of the human genome, even within the evolutionary timescale of millions of years.
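
    The arithmetic behind that limit is worth spelling out. The 300-generations-per-substitution figure is Haldane's; the 6-million-year window and 20-year generation time below are common assumptions added here purely for illustration.

    ```python
    years_available = 6_000_000       # assumed human/chimp divergence window (years)
    generation_time = 20              # assumed years per generation
    cost_per_substitution = 300       # Haldane's estimate: generations "spent" per fixed mutation

    generations = years_available / generation_time           # 300,000 generations
    max_substitutions = generations / cost_per_substitution   # ~1,000 fixed beneficial mutations

    print(f"Generations available: {generations:,.0f}")
    print(f"Upper bound on selective substitutions: ~{max_substitutions:,.0f}")
    ```

    On these assumptions, only on the order of a thousand beneficial mutations could be fixed by selection over the entire window.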

    Rarity of Function

    From the perspective of Dr. Douglas Axe, a molecular biologist and Director of the Biologic Institute, there is a mathematically fatal challenge to the Darwinian narrative. His research focuses on the “rarity of function”—specifically, how difficult it is to find a functional protein sequence among all possible combinations of amino acids.

    Proteins are chains of amino acids that must fold into precise three-dimensional shapes to function. There are 20 different amino acids available for each position in the chain. If you have a modest protein that is 150 amino acids long, the number of possible arrangements is 20^150. This number is roughly 10^195. To put this in perspective, there are only about 10^80 atoms in the entire observable universe.

    The “search space” of possible combinations is unimaginably vast. Evolutionary theory assumes that “functional” sequences (those that fold and perform a task) are common enough that random mutations can stumble upon them. Dr. Axe tested this assumption experimentally using a 150-amino-acid domain of the beta-lactamase enzyme. In his seminal 2004 paper published in the Journal of Molecular Biology, Axe determined the ratio of functional sequences to non-functional ones.

    He calculated that the probability of a random sequence of 150 amino acids forming a stable, functional fold is approximately 1 in 10^77. This rarity is catastrophic for evolution. To find just one functional protein fold by chance would be like a blindfolded man trying to find a single marked atom in the entire Milky Way galaxy. Because functional proteins are so isolated in sequence space, natural selection cannot help “guide” the process.

    Natural selection only works after a function exists. It cannot select a protein that doesn’t work yet. Axe describes functional proteins as tiny, isolated islands in a vast sea of gibberish. This is precisely the Valley of Death we discussed earlier. You cannot “gradually” evolve from one island to another because the space between them is lethal (non-functional). Even if the entire Earth were covered in bacteria dividing rapidly for 4.5 billion years, the total number of mutational trials would be roughly 10^40. This is nowhere near the 10^77 trials needed to statistically guarantee finding a single new protein fold.
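
    The orders of magnitude quoted in this section can be checked with a few lines of arithmetic. The 1-in-10^77 prevalence figure is Axe's published estimate and the 10^40-trial bound is the one quoted above; the comparison itself is simple counting.

    ```python
    import math

    residues = 150                                  # length of the protein domain discussed
    log_sequence_space = residues * math.log10(20)  # log10 of 20^150
    print(f"20^{residues} ~ 10^{log_sequence_space:.0f}")        # ~10^195

    functional_fraction = 1e-77   # Axe's reported prevalence of stable, functional folds
    trials_available = 1e40       # rough bound on total bacterial mutational trials quoted above

    expected_hits = math.log10(functional_fraction * trials_available)
    print(f"Expected successes in 10^40 random trials: ~10^{expected_hits:.0f}")  # ~10^-37
    ```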

    Muller’s Ratchet

    While Haldane highlighted the cost and Axe showed the scale, Muller showed the trajectory. Muller’s Ratchet describes the mechanism of irreversible decline. The genome is not a pool of independent genes; it is organized into “linkage blocks”—large chunks of DNA that are inherited together.

    Because beneficial mutations (if they occur) are physically linked to deleterious mutations on the same chromosome segment, natural selection cannot separate them. As deleterious mutations accumulate within these linkage blocks, the overall genetic quality of the block declines. Like a ratchet that only turns one way, the damage locks in. The “best” class of genomes in the population eventually carries more mutations than the “best” class of the previous generation. Over time, every linkage block in the human genome accumulates deleterious mutations faster than selection can remove them. There is no mechanism to reverse this damage, leading to a continuous, downward slide in genetic information.

    Genetic Entropy

    According to Dr. Sanford, these factors together create a lethal dilemma for the standard evolutionary model. The combination of high mutation rates, vast fitness landscapes, the high cost of selection, and physical linkage ensures that the human genome is rusting out like an old car, losing information with every generation.

    If humanity had been accumulating mutations for millions of years, our genome would have already reached “error catastrophe,” and we would be extinct. Alexey Kondrashov described this phenomenon in his paper, “Why Have We Not Died 100 Times Over?” The fact that we are still here suggests we have only been mutating for thousands, not millions, of years.

    The vast majority of mutations are harmful or “nearly neutral” (slightly harmful but invisible to natural selection). These mutations accumulate every generation. Human mutation rates indicate we are accumulating about 100 new mutations per person per generation. If humanity were hundreds of thousands of years old, we would have gone extinct from this genetic load.
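
    The accumulation claim is straightforward multiplication. The ~100 new mutations per person per generation is the figure quoted above; the 25-year generation time and the timescales compared are assumptions added for illustration.

    ```python
    mutations_per_generation = 100   # de novo mutations per person per generation (from the text)
    generation_time = 25             # assumed years per generation

    for years in (6_000, 200_000, 1_000_000):
        generations = years / generation_time
        accumulated = generations * mutations_per_generation
        print(f"{years:>9,} years -> {generations:>6,.0f} generations "
              f"-> ~{accumulated:,.0f} mutations accumulated per lineage")
    ```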

    Created Heterozygosity aligns with this reality. It posits a perfect, highly diverse starting point that is slowly losing information over time, rather than a simple starting point struggling to build information against the tide of entropy. The observed degeneration is also consistent with the Biblical account of a perfect Creation that was subjected to corruption and decay following the Fall.

    Rapid Speciation

    Proponents of CHNP do not believe in the “fixity of species.” Instead, they observe that species change and diversify over time—often rapidly. This is called “cis-evolution” (diversification within a kind) rather than “trans-evolution” (changing from one kind to another).

    Speciation often occurs when a sub-population becomes isolated and loses some of its initial genetic diversity, shifting from a heterozygous state to a more homozygous state. This reveals specific traits (phenotypes) that were previously hidden (recessive). These changes will inevitably make two populations reproductively isolated or incompatible over several generations. This particular form of speciation is sometimes called Mendelian speciation.

    Real-world examples of this can easily be found. We see this in the rapid diversification of cichlid fish in African lakes, which arose from “ancient common variations” rather than new mutations. We also see it in Darwin’s finches, where hybridization and isolation lead to rapid changes in beak size and shape. In fact, this phenomenon is so prevalent that it has its own name in the literature—contemporary evolution.

    Darwin himself noted that domestic breeds (like dogs or pigeons) show more diversity than wild species. If humans can produce hundreds of dog breeds in a few thousand years by isolating traits, natural processes acting on created diversity could easily produce the wild species we see (like zebras, horses, and donkeys) from a single created kind in a similar timeframe.

    Molecular Clocks

    Finally, when we look at Mitochondrial DNA (mtDNA)—which is passed down only from mothers—we find a “clock” that fits the biblical timeline perfectly.

    The number of mtDNA differences between modern humans fits a timescale of about 6,000 years, not hundreds of thousands. While the mtDNA clock suggests recent mutation accumulation, nuclear DNA differences are too numerous to be explained by mutation alone in 6,000 years. This confirms that the nuclear diversity must be front-loaded (original variety), while the mtDNA diversity represents mutational history.

    Conclusion

    The Created Heterozygosity model explains the origin of species by recognizing that God engineered life with the capacity to adapt, diversify, and fill the earth. It accounts for the massive genetic variation we see today without ignoring the mathematical impossibility of evolving that information from scratch. Rather than being a reaction against science, this model embraces modern genetic data—from the limits of natural selection to the reality of genetic entropy—to provide a robust history of life.

    Part VIII: Created Heterozygosity & Natural Processes

    The evidence for Created Heterozygosity, specifically within the Created Heterozygosity & Natural Processes (CHNP) model, makes several important predictions that distinguish it from the standard Darwinian explanations.

    Prediction 1: “Major” Allelic Architecture

    If created heterozygosity is correct, each gene locus of the human line should feature no more than four predominant alleles encoding functional, distinct proteins. This prediction follows from Adam and Eve carrying four genome copies in total. It can, however, be refined to be even more particular.

    Based on an analysis of the ABO gene within the Created Heterozygosity and Natural Processes (CHNP) model, the evidence suggests there were only two major alleles in the original created pair (Adam and Eve), rather than the theoretical maximum of four, for the following reasons:

    1. Only A and B are Functionally Distinct “Major” Alleles

    While a single pair of humans could theoretically carry up to four distinct alleles (two per person), the molecular data for the ABO locus reveals only two distinct, functional genetic architectures: A and B. The A and B alleles code for functional glycosyltransferase enzymes. They differ from each other by only seven nucleotides, four of which result in amino acid changes that alter the enzyme’s specificity. In an analysis of 19 key human functional loci, ABO is identified as having “dual majors.” These are the foundational, optimized alleles that are highly conserved and predate human diversification. Because A and B represent the only two functional “primordial” archetypes, the CHNP model posits that the original ancestors possessed the optimal A/B heterozygous genotype.

    2. The ‘O’ Allele is a Broken ‘A.’

    The reason there are not three (or four) original alleles (e.g., A, B, and O) is that the O allele is not a distinct, original design. It is a degraded version of the A allele.

    The most common O allele (O01) is identical to the A allele except for a single guanine deletion at position 261. This deletion causes a frameshift mutation, resulting in a truncated, non-functional enzyme. Because the O allele is simply a broken A allele, it represents a loss of information (genetic entropy) rather than originally created diversity. The CHNP model predicts that initial kinds were highly functional and optimized, containing no non-functional or suboptimal gene variants. Therefore, the non-functional O allele would not have been present in the created pair but arose later through mutation.
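
    To see why a single-base deletion is so destructive, the toy sketch below reads a short coding sequence in codons before and after removing one base. The sequence is invented for illustration; it is not the actual ABO sequence, and the 261delG event itself is as described above.

    ```python
    def codons(seq):
        """Split a DNA string into consecutive 3-base codons, reading from position 0."""
        return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

    original = "ATGGCTGTCCGTGAAACC"             # hypothetical coding snippet
    frameshifted = original[:6] + original[7:]  # delete one base (analogous to 261delG)

    print("original codons :", codons(original))
    print("after 1-bp del. :", codons(frameshifted))
    # Every codon downstream of the deletion is now read out of frame, so the encoded
    # amino-acid sequence changes completely and typically hits a premature stop codon.
    ```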

    3. AB is Optimal For Both Parents

    A critical medical argument for the AB genotype in both parents (and therefore 2 Major created alleles) concerns the immune system and pregnancy. The CHNP model suggests that an optimized creation would minimize physiological incompatibility between the first mother and her offspring.

    In the ABO system, individuals naturally produce antibodies against the antigens they lack. A person with Type ‘A’ blood produces anti-B antibodies; a person with Type ‘B’ produces anti-A antibodies; and a person with Type O produces both.

    Individuals with Type AB blood produce neither anti-A nor anti-B antibodies because they possess both antigens on their own cells.

    If the original mother (Eve) were Type A, she would carry anti-B antibodies, which could potentially attack a Type B or AB fetus (Hemolytic Disease of the Newborn). However, if she were Type AB, her immune system would tolerate fetuses of any blood type (A, B, or AB) because she lacks the antibodies that would attack them.

    If there were more than two original antigens, these problems would be inevitable. The only solution is for both parents to share the same two antigens.
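
    The antibody logic in this argument can be stated as a small rule table. The sketch below encodes only the standard ABO antigen/antibody rules; applying them to the created-pair question is the essay's argument, not something the code itself establishes.

    ```python
    ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

    def maternal_antibodies(blood_type):
        """A mother makes antibodies against whichever ABO antigens she lacks."""
        return {"A", "B"} - ANTIGENS[blood_type]

    def abo_incompatible(mother, fetus):
        """True if maternal anti-A/anti-B antibodies target an antigen the fetus carries."""
        return bool(maternal_antibodies(mother) & ANTIGENS[fetus])

    for mother in ("A", "B", "O", "AB"):
        conflicts = [fetus for fetus in ANTIGENS if abo_incompatible(mother, fetus)]
        print(f"Type {mother:>2} mother: incompatible fetal types -> {', '.join(conflicts) or 'none'}")
    # Only the AB mother has no ABO incompatibility with any possible fetal type.
    ```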

    4. Disclaimer about scope

    This, along with many other examples within the gene catalogue, suggests that most, if not all, original gene loci were bi-allelic, with both members of the created pair carrying the same two major alleles. This is not to say all were, as we do not have definitive proof of that, and there are several loci, e.g., immune-response genes, which could theoretically have had more than two Majors. However, it is highly likely that all genetic diversity can be explained by bi-genome, and not quad-genome, diversity. Where greater modern diversity is present, it can consistently be partitioned into two functional clades, with subsidiary alleles emerging via SNPs, InDels, or recombinations over short timescales.

    Prediction 2: Cross-Species Conservation

    Shared genes are essential in a created world in order for ecosystems to exist, so it should not be surprising that we share DNA with other organisms. From that premise, it follows that some organisms will be more or less similar to one another, and those similarities can be categorized. Because the laws of physics and chemistry impose inherent design constraints on living forms, we should expect functional genes to be shared throughout life wherever they are applicable. For instance, we share homeobox genes with much of terrestrial life, even down to snakes, mice, flies, and worms. These genes are similar because they perform similar functions. This is precisely what we would predict from a design hypothesis.

    Both models (CHNP and the extended evolutionary synthesis, EES) predict that some functional operations will be shared throughout all life. Although this prediction leans somewhat in favor of a design hypothesis, it is roughly agnostic evidence. The differentiating prediction is that “major” alleles will persist across genera, reflecting shared functional design principles, whereas non-functional variants will be species-specific. This prediction follows from the two models’ different understandings of the power of evolutionary processes to explain diversity.

    This prediction can be tested (along with the first) by examining allelic diversity (particularly in sequence alignment) across related and non-related populations. For instance, take the ABO blood type gene again. The genetic data confirm that functional “major” alleles are conserved across species boundaries, while non-functional variants are species-specific and recent.

    1. Major Alleles (A and B): Shared Functional Design

    Both models acknowledge that the functional A and B alleles are shared between humans and other primates (and even some distinct mammals). However, the interpretation differs, and the CHNP model posits this as evidence of major allelic architecture—original, front-loaded functional templates.

    The functional A and B alleles code for specific glycosyltransferase enzymes. Sequence analysis shows that humans, chimpanzees, and bonobos share the exact same genetic basis for these polymorphisms. This fits the prediction that “major” alleles represent the optimized, original design. Because these alleles are functional, they are conserved across genera (trans-species), reflecting a common design blueprint rather than convergent evolution or deep-time descent.

    Standard evolution attributes this to “trans-species polymorphism,” arguing that these alleles have been maintained by “balancing selection” for 20 million years, predating the divergence of humans and apes.

    2. Non-Functional Alleles (Type O): The Differentiating Test

    The crucial test arises when examining the non-functional ‘O’ allele. Because the ‘O’ allele confers a survival advantage against severe malaria, the standard evolutionary model must do one of the following: 1) explain why the ‘O’ allele is not, like ‘A’ and ‘B’, ancient and shared across lineages (trans-species inheritance), or 2) provide an example of a shared ‘O’ allele across a kind-boundary. The reason this prediction must follow is that the ‘O’ allele, being the null version, must by evolutionary reasoning have existed prior to either ‘A’ or ‘B’. What is more, ‘A’ and ‘B’ alleles can easily break, and the ‘O’ allele is not deleterious enough to be selected out of a given population.

    In humans, the most common ‘O’ allele (O01) results from a specific single nucleotide deletion (a guanine deletion at position 261), causing a frameshift that breaks the enzyme. However, sequence analysis of chimpanzees and other primates reveals that their ‘O’ alleles result from different, independent mutations.

    Human and non-human primate ‘O’ alleles are species-specific and result from independent silencing mutations. The mutation that makes a chimp Type ‘O’ is not the same mutation that makes a human Type ‘O’.

    This supports the CHNP prediction that non-functional variants arise after the functional variants, through recent genetic entropy (decay) rather than ancient ancestry. The ‘O’ allele is not a third “created” allele; it is a broken ‘A’ allele that arose independently in humans and chimps after they were distinct populations. It has become fixed in certain populations, such as those native to the Americas, because of the beneficial nature of the gene break.

    This also brings us back to the evolutionary problems mentioned earlier. Even if the four or more beneficial mutations needed to create an ‘A’ or ‘B’ allele could occur, which we discussed as being incredibly unlikely, either gene would likely break (via Muller’s Ratchet) faster than could account for the fixity of A and B in primates and other mammals.

    3. Timeline and Entropy

    The mutational pathways for the human ‘O’ allele fit a timeline of <10,000 years, appearing after the initial “major” alleles were established. This aligns with the CHNP view that variants arise via minimal genetic changes (SNPs, Indels) within the last 6,000–10,000 years.

    The emergence of the ‘O’ allele is an example of cis-evolution (diversification within a kind via information loss). It involves breaking a functional gene to gain a temporary survival advantage (malaria resistance), which is distinct from the creation of new biological information.

    4. Broader Loci Analysis

    This pattern is not unique to ABO. An analysis of 19 key human functional loci (including genes for immunity, metabolism, and pigmentation) confirms the “Major Allele” prediction:

    Out of the 19 loci, 16 exhibit a single (or dual, like ABO) major functional allele that is highly conserved across species, meaning that the functional versions of the genes are shared with other primates, mammals, vertebrates, or even eukaryotes. In contrast, non-functional or pathogenic variants (such as the CCR5-Δ32 deletion or CFTR mutations) are predominantly human-specific and arose recently (often <10,000 years ago). And when similar non-functional traits appear in different species (e.g., MC1R loss or the ‘O’ blood group), they are due to convergent, independent mutations, not shared ancestry.

    To illustrate this point, below is a table from the paper testing the CHNP model across 19 functional genes. Table 1 summarizes key metrics for each locus. Across the dataset, 84% (16/19) exhibit a single major functional allele conserved >90% across mammals/primates, with variants emerging <50,000 years ago (kya). ABO and HLA-DRB1 align with dual ancient clades; SLC6A4 shows neutral biallelic drift. Non-functional variants (e.g., nulls, deficiencies) are human-specific in 89% of cases, often via single SNPs/InDels.

    | Locus | Major Allele(s) | Functional Groups (Ancient?) | Cross-Species Conservation | Variant Derivation (Changes/Time) | Model Fit (1/2/3) |
    |---|---|---|---|---|---|
    | HLA-DRB1 | Multiple lineages (e.g., *03, *04) | 2+ ancient clades (pre-Homo-Pan) | High in primates (trans-species) | Recombinations/SNPs; post-speciation (~100 kya) | Strong (clades); Partial (multi); Strong |
    | ABO | A/B (O derived) | 2 ancient (A/B trans-species) | High in primates | Inactivation (1 nt del.); <20 kya | Strong; Strong; Strong |
    | LCT | Ancestral non-persistent (C/C) | 1 major | High across mammals | SNPs (e.g., -13910T); ~10 kya | Strong; Strong; Strong |
    | CFTR | Wild-type (non-ΔF508) | 1 major | High across vertebrates | 3 nt del. (ΔF508); ~50 kya | Strong; Strong; Strong |
    | G6PD | Wild-type (A+) | 1 major | High (>95% identity) | SNPs at conserved sites; <10 kya | Strong; Strong; Strong |
    | APOE | ε4 (ancestral) | 1 major (ε3/2 derived) | High across mammals | SNPs (Arg158Cys); <200 kya | Strong; Strong; Partial |
    | CYP2D6 | *1 (wild-type) | 1 major | Moderate in primates | Deletions/duplications; recent | Strong; Partial; Strong |
    | FUT2 | Functional secretor | 1 major | High in vertebrates | Truncating SNPs; ancient nulls (~100 kya) | Strong; Strong; Partial |
    | HBB | Wild-type (HbA) | 1 major | High across vertebrates | SNPs (e.g., sickle Glu6Val); <10 kya | Strong; Strong; Strong |
    | CCR5 | Wild-type | 1 major | High in primates | 32-bp del.; ~700 ya | Strong; Strong; Strong |
    | SLC24A5 | Ancestral Ala111 (dark skin) | 1 major | High across vertebrates | Thr111 SNP; ~20–30 kya | Strong; Strong; Strong |
    | MC1R | Wild-type (eumelanin) | 1 major | High across mammals | Loss-of-function SNPs; convergent in some | Strong; Partial (conv.); Strong |
    | ALDH2 | Glu504 (active) | 1 major | High across eukaryotes | Lys504 SNP; ~2–5 kya | Strong; Strong; Strong |
    | HERC2/OCA2 | Ancestral (brown eyes) | 1 major | High across mammals | rs12913832 SNP; ~10 kya | Strong; Strong; Strong |
    | SERPINA1 | M allele (wild-type) | 1 major | High in mammals (family expansion) | SNPs (e.g., PiZ Glu342Lys); recent | Strong; Strong; Strong |
    | BRCA1 | Wild-type | 1 major | High in primates | Frameshifts/nonsense; <50 kya | Strong; Strong; Strong |
    | SLC6A4 | Long/short 5-HTTLPR | 2 neutrally evolved | High across animals | InDel (VNTR); ancient (~500 kya) | Partial; Strong; Partial |
    | PCSK9 | Wild-type | 1 major | High in primates (lost in some mammals) | SNPs (e.g., Arg469Trp); recent | Strong; Strong (conv. loss); Strong |
    | EDAR | Val370 (ancestral) | 1 major | High across vertebrates | Ala370 SNP; ~30 kya | Strong; Strong; Strong |
    Table 1: Evolutionary Profiles of Analyzed Loci. Model Fit: Tenet 1 (major architecture), 2 (conservation), 3 (derivation). “Partial” indicates minor deviations (e.g., multi-clades or potentially >10 kya).

    This is devastating for the modern synthesis. If the pattern that emerges is one of shared functions rather than shared mistakes, the theory is dead on arrival.

    Prediction 3: Derivation Dynamics

    Another important prediction concerns the timeline entailed by created heterozygosity. If life were designed recently (an entailment of CHNP), variant alleles must have arisen from “majors” through minimal modifications, feasible within roughly 6 to 10 thousand years.

    Looking at the ABO blood group once more, we see that this prediction is entirely feasible. ABO offers a “cornerstone” example, demonstrating how complex diversity collapses into simple, recent mutational events.

    1. The ABO Case Study: Minimal Modification

    The CHNP model identifies the A and B alleles as the original, front-loaded “major” alleles created in the founding pair. The diversity we see today (such as the various O alleles and A subtypes) supports the prediction of minimal, recent modification:

    As we’ve discussed, the most common O allele (O01) is not a novel invention; it is a broken ‘A’ allele. It differs from the ‘A’ allele by a single guanine deletion at position 261. This minute change causes a frameshift that renders the enzyme non-functional. Other ABO variants show similar minimal changes. The A2 allele (a weak version of A) results from a single nucleotide deletion and a point mutation. The B3 allele results from point mutations that reduce enzymatic activity.

    These are not complex architectural changes requiring millions of years. They are “typos” in the code. Molecular analysis confirms that the mutation causing the O phenotype is a common, high-probability event.

    2. The Mathematical Feasibility of the Timeline

    A mathematical breakdown can be used to demonstrate that these variants would inevitably arise within a young-earth timeframe using standard mutation rates.

    Using a standard mutation rate (1.5×10^−8 per base pair per generation) and an exponentially growing population (starting from founders), mutations accumulate rapidly and easily. Calculations suggest that in a population growing from a small founder group, the first expected mutations in the ABO exons would appear as early as Generation 4 (approx. 80 years). Over a period of 5,000 years, with a realistic population growth model, the 1,065 base pairs of the ABO exons would theoretically experience tens of thousands of mutation events. The gene would be thoroughly saturated, meaning virtually every possible single-nucleotide change would have occurred multiple times.
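
    A minimal version of that saturation calculation is sketched below. The mutation rate and the 1,065 bp exon length are the figures given above; the growth model (two founders, doubling each 25-year generation, capped at ten million people) is my own simplifying assumption rather than the population model used in the source.

    ```python
    MU = 1.5e-8        # mutations per base pair per generation (from the text)
    EXON_BP = 1_065    # combined ABO exon length in base pairs (from the text)
    GEN_YEARS = 25     # assumed generation time
    CAP = 10_000_000   # assumed population ceiling

    per_birth = 2 * EXON_BP * MU   # expected new ABO-exon mutations per child (both genome copies)

    population, total_mutations = 2.0, 0.0
    for _ in range(int(5_000 / GEN_YEARS)):           # ~200 generations over 5,000 years
        total_mutations += population * per_birth     # expected new mutation events this generation
        population = min(population * 2, CAP)         # crude exponential growth with a ceiling

    print(f"Expected ABO-exon mutation events over ~5,000 years: ~{total_mutations:,.0f}")
    ```

    Even with this crude growth model, the expected count lands in the tens of thousands, consistent with the saturation argument above.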

    Specific estimates for the emergence of the ‘O’ allele place it within 50 to 500 generations (1,000 to 10,000 years) under neutral drift, or even faster with selective pressure. This perfectly fits the CHNP timeline of 6,000-10,000 years.

    3. Further Validation: The 19 Loci Analysis

    This pattern of “Ancient Majors, Recent Variants” is not unique to ABO. The 19 key human functional loci study also confirms that this is a systemic feature of the human genome.

    Across genes involved in immunity, metabolism, and pigmentation, derived variants consistently appear to have arisen within the last 10,000 years (Holocene):

    • ALDH2: The variant causing the “Asian flush” (Glu504Lys) is estimated to be ~2,000 to 5,000 years old.
    • LCT (lactase persistence): The mutation allowing adults to digest milk arose ~10,000 years ago, coinciding with the advent of dairy farming.
    • HBB (sickle cell): The hemoglobin variant conferring malaria resistance emerged <10,000 years ago.

    In 89% of the analyzed cases, these variants are caused by single SNPs or InDels derived from the conserved major allele.

    The prediction that variant alleles must be derived via minimal modifications feasible within a young timeframe is strongly supported by the genetic data. The ABO system demonstrates that the “O” allele is merely a single deletion that could arise in less than 100 generations.

    This confirms the CHNP view that while the “major” alleles (A and B) represent the original, complex design (Major Allelic Architecture), the variants (O, A2, etc.) are the result of recent, rapid genetic entropy (cis-evolution) that requires no deep-time evolutionary mechanisms to explain.

    An ABO Blood Group Paradox

    As we have run through these first three predictions of the Created Heterozygosity model, we have dealt particularly with the ABO gene and have run into a peculiar evolutionary puzzle. Let’s first speak of this paradox more abstractly in the form of an analogy:

    Imagine a family of collectors who passed down two distinct types of antique coins (Coins A and B) to their descendants over centuries because those coins were valuable. If a third type of coin (Coin O) were also extremely valuable (offering protection/advantage) and easier to mint, you would predict that the ancestors would have kept Coin O and passed it down to both lineages alongside A and B. You would not expect the descendants to inherit A and B from the ancestor yet have to re-mint Coin O from scratch, independently, in each lineage.

    By virtue of this same logic, evolutionary models must predict that the ‘O’ allele should be ancient (20 million years) due to balancing selection. However, the genetic data shows ‘O’ alleles are recent and arose independently in different lineages. This supports the CHNP view: the original ancestors were created with functional A and B alleles (heterozygous), and the O allele is a recent mutational loss of function.

    Prediction 4: Rapid Speciation and Adaptive Radiation

    If created heterozygosity is true, and organisms were designed with built-in potential for adaptation given their environment, then we should expect to find mechanisms of extreme foresight that permit rapid change to external stressors. There are, in fact, many such mechanisms which are written about in the scientific literature: contemporary evolution, natural genetic engineering, epigenetics, higher agency, continuous environmental tracking, non-random evolution, evo-devo, etc.

    The phenomenon of adaptive radiation—where a single lineage rapidly diversifies into many species—is clearly differentiating evidence for front-loaded heterozygosity rather than mutational evolution. Why? Because random mutation has no foresight. Random mutations do not prepare an organism for any eventuality. If it is not useful now, get rid of it. That is the mantra of evolutionary theory. That is the premise of natural selection. However, this premise is drastically mistaken.

    1. Natural Genetic Engineering & Non-Random Evolution

    The foundation of this alternative view is that genetic change is not accidental. Molecular biologist James Shapiro argues that cells are not passive victims of random “copying errors.” Instead, they possess “active biological functions” to restructure their own genomes. Cells can cut, splice, and rearrange DNA, often using mobile genetic elements (transposons) and retroviruses to rewrite their genetic code in response to stress. Shapiro calls the genome a “read-write” database rather than a read-only memory (ROM).

    Building on this, Dr. Lee Spetner proposed that organisms have a built-in capacity to adapt to environmental triggers. These changes are not rare or accidental but can occur in a large fraction of the population simultaneously. This work is supported by modern research from scientists such as Dr. Michael Levin and Dr. Denis Noble. Increasingly, mutations appear to be a predictable response to environmental inputs.

    2. The Architecture: Continuous Environmental Tracking (CET)

    If organisms engineer their own genetics, how do they know when to do it? This is where CET provides the engineering framework.

    Proposed by Dr. Randy Guliuzza, CET treats organisms as engineered entities. Just like a self-driving car, organisms possess input sensors (to detect the environment), internal logic/programming (to process data), and output actuators (to execute biological changes). In Darwinism, the environment is the “selector” (a sieve). In CET, the organism is the active agent. The environment is merely the data the organism tracks. For example, blind cavefish lose their eyes not because of random mutations and slow selection, but because they sense the dark environment and downregulate eye development to conserve energy, a process that is rapid and reversible. More precisely, the regulatory systems of these cavefish can detect the low salinity of cave water, which triggers the loss of eye development over a short timeframe.

    3. The Software: Epigenetics

    Epigenetics acts as the “formatting” or the switches for the DNA computer program. Epigenetic mechanisms (like methylation) regulate gene expression without changing the underlying DNA sequence. This allows organisms to adapt quickly to environmental cues—such as plants changing flowering times or root structures. These changes can be heritable. For instance, the environment of a parent (e.g., diet, stress) can affect the development of the offspring via RNA absorbed by sperm or eggs, bypassing standard natural selection. This blurs the line between the organism and its environment, facilitating rapid adaptation.

    4. The Result: Contemporary Evolution

    When these internal mechanisms (NGE, CET, Epigenetics) function, the result is Contemporary Evolution—observable changes happening in years or decades, not millions of years. Conservationists and biologists are observing “rapid adaptation” in real-time. Examples include invasive species changing growth rates in under 10 years, or the rapid diversification of cichlid fish in Lake Victoria.

    For Young Earth Creationists (YEC), Contemporary Evolution validates the concept of Rapid Post-Flood Speciation. It shows that getting from the “kinds” on Noah’s Ark to modern species diversity in a few thousand years is biologically feasible.

    Conclusion

    So, where does the information for all this diversity come from? CHNP is the specific model that explains the source of the variation being tracked and engineered.

    This model posits that original kinds were created as pan-heterozygous (carrying different alleles at almost every gene locus). As populations grew and migrated (Contemporary Evolution), they split into isolated groups. Through sexual reproduction (recombination), the original heterozygous traits were shuffled. Over time, specific traits became “fixed” (homozygous), leading to new species.
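
    The fixation dynamic described here can be illustrated with a standard population-genetics result: under drift alone, expected heterozygosity in an isolated population of effective size N decays by a factor of (1 - 1/2N) per generation. The population sizes and time spans below are illustrative assumptions; no mutation or selection is modeled.

    ```python
    def expected_heterozygosity(h0, pop_size, generations):
        """Expected heterozygosity after drift alone, with no mutation or selection."""
        return h0 * (1 - 1 / (2 * pop_size)) ** generations

    H0 = 0.5  # assume the founding stock starts maximally heterozygous at a typical locus
    for pop_size in (50, 500, 5_000):
        for generations in (100, 1_000):
            h = expected_heterozygosity(H0, pop_size, generations)
            print(f"N={pop_size:>5}, t={generations:>5} generations: expected heterozygosity ~{h:.3f}")
    ```

    Small, isolated sub-populations lose heterozygosity quickly, which is the sense in which traits become “fixed” in the model described above.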

    This model argues that random mutation cannot bridge the gap between distinct biological forms (the Valley of Death) due to toxicity and complexity. Therefore, diversity must be the result of sorting pre-existing (front-loaded) functional alleles rather than creating new ones from scratch.

    Look at it this way:

    1. Mendelian Speciation/Created Heterozygosity is the Resource: It provides the massive library of latent genetic potential (front-loaded alleles).

    2. Continuous Environmental Tracking is the Control System: It uses sensors and logic to determine which parts of that library are needed for the current environment.

    3. Epigenetics and Natural Genetic Engineering are the Mechanisms: They are the tools the cells use to turn genes on/off (epigenetics) or restructure the genome (NGE) to express those latent traits.

    4. Contemporary Evolution is the Observation: It is the visible, rapid diversification (cis-evolution) we see in nature today as a result of these internal systems working on the front-loaded information.

    Together, these concepts argue that organisms are not passive lumps of clay shaped by external forces (Natural Selection), but sophisticated, engineered systems designed to adapt and diversify rapidly within their kinds.

    The mechanism driving this diversity is the recombination of pre-existing heterozygous genes. Just 20 heterozygous genes can theoretically produce over one million unique homozygous phenotypes. As populations isolate and speciate, they lose their initial heterozygosity and become “fixed” in specific traits. This process, known as cis-evolution, explains diversity within a kind (e.g., wolves to dog breeds) but differs fundamentally from trans-evolution (evolution between kinds), which finds no mechanism in genetics.
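
    The one-million figure is simple combinatorics: with 20 unlinked biallelic loci, each locus can end up homozygous for either of its two alleles, so the fully homozygous end states number 2^20. The snippet below just checks that count (linkage and loci with more than two alleles are ignored).

    ```python
    heterozygous_loci = 20
    homozygous_combinations = 2 ** heterozygous_loci
    print(f"{homozygous_combinations:,}")   # 1,048,576 -- "over one million"
    ```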

    The CHNP model argues that mutations are insufficient to create the original genetic information due to thermodynamic and biological constraints. De novo protein creation is hindered by a “Valley of Death”—a region of sequence space where intermediate, misfolded proteins are toxic to the cell. Natural selection eliminates these intermediates, preventing the gradual evolution of novel protein folds.

    Mechanisms often cited as creative, such as gene duplication or recombination, are actually “remixing engines.” Duplication provides redundancy, not novelty, and recombination shuffles existing alleles without creating new genetic material. Because mutations are modifications (typos) rather than creations, the original functional complexity must have been present at the beginning.

    Genetics reveals that organisms contain “latent” or hidden information that can be expressed later.

    Information can be masked by dominant alleles or epistatic interactions (where one gene suppresses another). This allows phenotypic traits to remain hidden for generations and reappear suddenly when genetic combinations shift, facilitating rapid adaptation without new mutations.

    Genetic elements like transposons can reversibly activate or deactivate genes (e.g., in grape color or peppered moths), acting as switches for pre-existing varieties rather than creators of new genes.

    Summary

    The genetic evidence for created heterozygosity rests on the observation that biological novelty is ancient and conserved, while variation is recent and degenerative. By starting with ancestors endowed with high levels of heterozygosity, the “forest” of life’s diversity can be explained by the rapid sorting and recombination of distinct, front-loaded genetic programs.

  • The Irreducibility of Life

    The Irreducibility of Life

    In his paper “Life Transcending Physics and Chemistry,” Michael Polanyi examines biological machines in a way that illuminates the explanatory failures of materialism. The prevailing materialist paradigm, that life can be fully explained by the laws of inanimate nature, fails to account for higher-order realities whose operations and structures involve non-material judgements and interpretations. He specifically addresses the views of scientists such as Francis Crick, who, along with James Watson, argued for a thoroughly reductionist and nominalist view based on their discovery of the structure of DNA. For Polanyi, all biological organisms have a life-transcending nature akin to that of machines and their transcendent properties. His central argument rests on the concept of “boundary control,” which holds that while there are laws governing physical reactions (as Crick would accept), there are also laws of form and function that are distinct from, and not reducible to, those lower-level laws.

    There is a real clash between Polanyi’s position and the reductionist/nominalist position commonly held by molecular biologists. To broach this divergence, he explains how the then-recent discovery of the genetic function of DNA was interpreted as the final blow to vitalist thought within the sciences. He writes:

    “The discovery by Watson and Crick of the genetic function of DNA (deoxyribonucleic acid), combined with the evidence these scientists provided for the self-duplication of DNA, is widely held to prove that living beings can be interpreted, at least in principles, by the laws of physics and chemistry.”

    Polanyi explicitly rejects Crick’s interpretation, even though that interpretation remains the mainstream view in both academic and popular writing. Polanyi notes that his own principle “has so far been accepted by few biologists and has been sharply rejected by Francis Crick, who is convinced that all life can be ultimately accounted for by the laws of inanimate nature.” This same sentiment can indeed be found in Crick’s book “Of Molecules and Men.” Crick writes the following:

    “Thus eventually one may hope to have the whole of biology “explained” in terms of the level below it, and so on right down to the atomic level.”

    To dismantle the materialist argument, Polanyi utilizes the analogy of a machine. A machine cannot be defined or understood solely through the physical and chemical properties of its materials. Take a watch and feed it into a device that can produce a detailed atomic map of it: could even the best chemist give a coherent account of whether the watch is functioning? Worse, could one even say what a watch is, if all that exists is matter in motion for no particular reason? Polanyi puts it best:

    “A complete physical-chemical topography of my watch—even though the topography included the changes caused by the movements in the watch—would not tell us what this object is. On the other hand, if we know watches, we would recognize an object as a watch by a description of it which says that it tells the time of the day… We know watches and can describe one only in terms like ‘telling the time,’ ‘hands,’ ‘face,’ ‘marked,’ which are all incapable of being expressed by the variables of physics, length, mass, and time.”

    Once you see this distinction, you are invariably led (as Polanyi was) to two distinct substrata of explanation: what he calls dual control. Obviously, there are physical laws that dictate the constraints and operations of all matter, and all material things can be described by these laws. However, those laws are only meaningfully called constraints when there is some notion of intention or design to be constrained. The shape of any machine, man-made or biological, is not determined by natural laws; not only is it not determined by them, it cannot be determined by them in any way. Polanyi elaborates on this relationship:

    “The machine is a machine by having been built and being then controlled according to principles of engineering. The laws of physics and chemistry are indifferent to these principles; they would go on working in the fragments of the machine if it were smashed. But they serve the machine while it lasts; machines rely for their operations always on the laws of physics and chemistry.”

    As I hinted at before, Polanyi also applies this logic to biological systems, arguing that morphology is a boundary condition in the same way that the design of a machine is a boundary condition. Biology cannot be reduced to physics because the structure that defines a living being is not the result of physical-chemical equilibration. Physical laws do not intend to create, nor do they care whether anything functions. Instead, “biological principles are seen then to control the boundary conditions within which the forces of physics and chemistry carry on the business of life.”

    Where Polanyi and Crick truly disagree, then, is in their interpretation of the explanatory power of nature and how DNA is implicated within these frameworks. While Crick views DNA as a chemical agent that proves reducibility, Polanyi argues that the very nature of DNA as an information carrier proves the opposite. For a molecule to function as a code, its sequence cannot be determined by chemical necessity. If chemical laws dictated the arrangement of the DNA molecule, it would be a rigid crystal incapable of conveying complex, variable information. Polanyi writes:

    “Thus in an ideal code, all alternative sequences being equally probable, its sequence is unaffected by chemical laws, and is an arithmetical or geometrical design, not explicable in chemical terms.”

    By treating DNA as a transmitter of information, Polanyi aligns it with other non-physical forms of communication, such as a book. The physical chemistry of the ink and paper does not explain the content of the text. Similarly, the chemical properties of DNA do not explain the genetic information it carries. Polanyi contends that Crick’s own theory inadvertently supports this non-materialist conclusion:

    “The theory of Crick and Watson, that four alternative substituents lining a DNA chain convey an amount of information approximating that of the total number of such possible configurations, amounts to saying that the particular alignment present in a DNA molecule is not determined by chemical forces.”

    Therefore, the pattern of the organism, derived from the information in DNA, represents a constraint that physics cannot explain. It is a boundary condition that harnesses matter. Polanyi concludes that the organization of life is a specific, highly improbable configuration that transcends the laws governing its atomic constituents:

    “When this structure reappears in an organism, it is a configuration of particles that typifies a living being and serves its functions; at the same time, this configuration is a member of a large group of equally probable (and mostly meaningless) configurations. Such a highly improbable arrangement of particles is not shaped by the forces of physics or chemistry. It constitutes a boundary condition, which as such transcends the laws of physics and chemistry.”

    In this way, Polanyi refutes the nominalist materialist perspective by demonstrating that the governing principles of life—its form, function, and information content—are logically distinct from, and irreducible to, the physical laws that govern inanimate matter. Physical laws are, then, merely one piece of the explanatory puzzle; on their own, they are insufficient to account for the existence of particular organizations of matter that physics and chemistry do not determine.

  • Specious Extrapolations in Origin of Species

    Specious Extrapolations in Origin of Species

    In The Origin of Species, Darwin outlines evidence against the contemporary notion of species fixity, i.e., the idea that species represent immovable boundaries. He first uses the concept of variation alongside his proposed mechanism of natural selection to build a plausible case that not merely varieties, breeds, or races of organisms, but species themselves, are commonly descended. Then, in chapter 4, after introducing a taxonomic tree as a picture of biotic diversification, he writes,

    “I see no reason to limit the process of modification, as now explained, to the formation of genera alone.”

    This sentence encapsulates the theoretical move that introduced universal common ancestry as a permissible and now widely accepted scientific model. There is much to discuss regarding the arguments and warrants of the modern debate; however, let us take Darwin on his own terms. In those pivotal paragraphs of his seminal work, was Darwin’s extrapolation merited? Do the mechanisms, and the evidence put forth for them, bring us inevitably to this conclusion, or is the argument still inconclusive? In this essay, we will argue that, while Darwin’s analogical reasoning was ingenious, his reliance on uniformitarianism and nominalism may render his extrapolation less secure than it first appears.

    In order to explain this, one must first understand the logical progression Darwin must follow. There appear to be three major assumptions, or premises: (1) analogism: artificial selection is analogous to natural selection; (2) uniformitarianism: variation is a mostly consistent and uniform process through biological time; and (3) nominalism: all variations and, therefore, all forms vary by degree only and not in kind. Here, we use ‘nominalism’ in the sense that species categories reflect human classification rather than intrinsic natural divisions, a position Darwin implicitly adopts.

    Of his three assumptions, one shows itself to be particularly strong: that of analogism. He spends most of the first four chapters defending this premise from multiple angles, going into detail on the powers of artificial selection in chapter one. This detail helps us identify which particular aspect of artificial selection leads to the observed robustness and fitness within its newly delineated populations. For this, he highlights mild selection over a long time. While rapid selection can produce drastic change, it is less sustainable, offering a narrower range of variation to work with (as variations take time to emerge).

    However, even with this carefully developed premise, let us not overlook its flaws. Notice that the evidence for the power of long-term selection is said to show that it brings about more robust or larger changes within some organisms in at least some environments. However, what evidence does Darwin present to demonstrate this case?

    Darwin does not provide a formal, quantifiable, long-term experiment to demonstrate the superiority of mild, long-term selection. Instead, he relies on descriptive, historical examples from breeders’ practices and then uses a logical argument based on the nature of variation. Thus, Darwin’s appeal demonstrates plausibility, not proof. This is an important distinction if one is to treat natural selection as a mechanism of universal transformation rather than limited adaptation.

    Even so, the extrapolation of differential selection, and of the environment’s role in it, is not egregiously contentious or strange. Indeed, perhaps surprisingly, the assumption of analogism turns out to be the most defensible of the three. The assumptions that stand in greater doubt are uniformitarianism and nominalism, which will occupy the rest of this essay. These two assumptions undergird Darwin’s broader inference. When formalized, they resemble the following arguments:

    Argument from Persistent Variation and Selection:

    Premise 1: If the mechanisms of variation and natural selection are persistent through time, then we can infer universal common descent.

    Premise 2: The mechanisms of variation and natural selection are persistent through time.

    Conclusion: Therefore, we can infer universal common descent.

    Argument from Difference in Degree:

    Premise 1: If all life differs only by degree and not kind, then we can infer that variation is a sufficient process to create all modern forms of life.

    Premise 2: All life differs only by degree and not kind.

    Conclusion: Therefore, we infer that variation is a sufficient process to create all modern forms of life.

    From these inferential conclusions, we see the importance of the two final assumptions as a fountainhead of the stream of Darwinian theory. 

    Before moving on, a few disclaimers are in order. It is worth noting that both arguments are contingent on the assumption that biology has existed throughout long geological time scales, but that is to be put aside for now. Notice we are now implicitly granting the assumption of analogism, and this imported doctrine is, likewise, essential to any common descent arguments. Finally, it is also worth clarifying that Darwin’s repeated insistence that ‘no line of demarcation can be drawn’ between varieties and species exemplifies the nominalist premise on which this argument from degree depends.

    To test these assumptions and determine whether they are as plausible as Darwin takes them to be, we first need to examine their constituent evidence and whether they provide empirical or logical support for Darwin’s thesis.

    The uniformitarian view can be presented in several ways. For Darwin, the view was the lens through which he saw biology, based on the Principles of Geology as articulated by Charles Lyell. Overall, it is not a poor inferential standard by any means. There are, however, certain caveats that limit its reach in any science. Essentially, the mechanism in question must be precisely known, so that what X is observed to do is never quietly extrapolated into what X has not been shown capable of doing.

    How Darwin frames the matter is to say, “I observe X happening at small scales, therefore X can accumulate indefinitely.” This is not inherently incorrect or poor science in and of itself. However, one might ask: if one does not know the specific mechanisms involved in this variation process, is it really plausible to extrapolate these unknown variables far into the past or the future? Without knowing how variation actually works (no Mendelian genetics, no understanding of heredity’s material basis), Darwin is in a conundrum. He cannot justify the assumption that variation is unlimited if he cannot explain what it would even mean for that proposition to be true across deep time. It is like measuring the Mississippi’s sediment deposition rate, as was done for over 170 years, and extrapolating it back to a time when the delta would have filled the Gulf of Mexico. Alternatively, it is like measuring present-day water erosion along the White Cliffs of Dover and extrapolating back in time until the cliffs rejoin the European continent. In the first case, the flaw lies in assuming constant deposition rates. In the second, it is evident that water erosion alone could not have caused the original break between England and France.

    It is the latter issue that is of deep concern here. There are too many unknowns in this equation to make the extrapolation remotely scientific. Admittedly, consistently observing a phenomenon does not always require understanding its mechanism before one extrapolates. However, Darwin’s theory is historical in a way that gravity, disease, or early mechanistic explanations were not; it cannot be immediately tested. Darwin, at best, leaves us to do the bulk of the grunt work after indulging in what can only be called guesswork.

    Darwin’s second line of reasoning toward the universal common ancestry thesis relies heavily on a philosophical view of reality: nominalism. For nominalism to be correct, all traits and features would need to be quantitatively different (longer/shorter, harder/softer, heavier/lighter, rougher/smoother) without any that are qualitatively different (light/dark, solid/liquid/gas, color/sound, circle/square). In order to determine whether biology contains qualitative distinctions, we must understand how and in what way kinds become differentiable.

    The best polemical examples of discrete things that differ by more than degree are colors. Colors can be hard to pin down on occasion, and Darwin would have an easy time, as he did in the taxonomic discourse over species and varieties, pointing out the divided schools of thought on the classification of colors. Intuitively, there is a straightforward flow from some red to some blue. Even if the endpoints are mostly distinguishable, is not that cloud or wash of in-betweens enough to question the whole enterprise of genuine or authentic categories?

    However, moving from blue to yellow is not just an increase or decrease in something; it is a change to an entirely new color identity. It is a new form. The perceptual experience of blue is qualitatively different from the perceptual experience of yellow, meaning the two affect the viewer in particular and different ways. Hues, specifically, are indeed highly differentiated and are clear species within the genus of color. An artist mixing blue and yellow to create green does not thereby prove that blue and yellow are not real, distinct colors; it proves only that intermediates are possible. Likewise, it is no business of the taxonomist, who calls some groups species and others varieties, to deny the reality of any of these separate groups and count them as arbitrary and nominal. If colors, which exist on a continuous spectrum of wavelengths, still exhibit qualitative differences, then Darwin’s assumption that all biological features exist only on quantitative gradients becomes questionable.

    However, Darwin has done this very thing, representing different kinds of structures, with different developmental origins and functional architectures, as a mere spectrum with no distinct affections or purposes. Darwin needs variation to be infinitely plastic, but what does he say to real biological constraints? Is it ever hard to tell the difference between a plant and an animal? A beak from fangs? A feather from fur? A nail from a claw? A leaf from a pine needle? What if body plans have an inherent organizational logic that resists certain transformations? He treats organisms like clay that can be molded into any form, but what if they are more like architectural structures with load-bearing walls? Darwin is missing good answers to these concerns, all of which need answers before the Argument from Difference in Degree can be called sound or convincing.

    This critique does not diminish Darwin’s achievement in proposing a naturalistic mechanism for adaptation. Instead, it highlights the philosophical assumptions embedded in his leap from observable variation to universal common descent, assumptions that, in 1859, lacked the mechanistic grounding that would make such an extrapolation scientifically secure.

  • The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The evidence typically presented as definitive proof for the theory of common descent, the nested hierarchy of life and genetic/trait similarities, is fundamentally agnostic. This is because evolutionary theory, in its broad explanatory power, can be adapted to account for virtually any observed biological pattern post-hoc, thereby undermining the claim that these patterns represent unique or strong predictions of common descent over alternative models, such as common design.

    I. The Problematic Nature of “Prediction” in Evolutionary Biology

    1. Strict Definition of Scientific Prediction: A true scientific prediction involves foretelling a specific, unobserved phenomenon before its discovery. It is not merely explaining an existing observation or broadly expecting a general outcome.
    2. Absence of Specific Molecular Predictions:
      • Prior to the molecular biology revolution (pre-1950s/1960s), no scientist explicitly predicted the specific molecular similarity of DNA sequences across diverse organisms, the precise double-helix structure, or the near-universal genetic code. These were empirical discoveries, not pre-existing predictions.
      • Evolutionary explanations for these molecular phenomena (e.g., the “frozen accident” hypothesis for the universal genetic code) were formulated after the observations were made, rendering them post-hoc explanations rather than predictive triumphs.
      • Interpreting broad conceptual statements from earlier evolutionary thinkers (like Darwin’s “one primordial form”) as specific molecular predictions is an act of “eisegesis”—reading meaning into the text—rather than drawing direct, testable predictions from it. A primordial form does not necessitate universal code, universal protein sequences, universal logic, or universal similarity.

    II. The Agnosticism of the Nested Hierarchy

    1. The Nested Hierarchy as an Abstract Pattern: The observation that life can be organized into a nested hierarchy (groups within groups, e.g., species within genera, genera within families) is an abstract pattern of classification. This pattern existed and was recognized (e.g., by Linnaeus) long before Darwin’s theory of common descent.
    2. Compatibility with Common Design: A designer could, for various good reasons (e.g., efficiency, aesthetic coherence, reusability of components, comprehensibility), choose to create life forms that naturally fall into a nested hierarchical arrangement. Therefore, the mere existence of this abstract pattern does not uniquely or preferentially support common descent over a common design model.
    3. Irrelevance of Molecular “Details” for this Specific Point: While specific molecular “details” (such as shared pseudogenes, endogenous retroviruses, or chromosomal fusions) are often cited as evidence for common descent, these are arguments about the mechanisms or specific content of the nested hierarchy. These are not agnostic and can be debated fruitfully. However, they do not negate the fundamental point that the abstract pattern of nestedness itself remains agnostic, as it could be produced by either common descent or common design.

    III. Evolutionary Theory’s Excessive Explanatory Flexibility (Post-Hoc Rationalization)

    1. Fallacy of Affirming the Consequent: The logical structure “If evolutionary theory (Y) is true, then observation (X) is expected” does not logically imply “If observation (X) is true, then evolutionary theory (Y) must be true,” especially if the theory is so flexible that it can explain almost any X.
    2. Capacity to Account for Contradictory or Diverse Outcomes:
      • Genetic Similarity: Evolutionary theory could equally well account for a model with no significant genetic similarity between organisms (e.g., if different biochemical pathways or environmental solutions were arrived at randomly, or if genetic signals blurred too quickly over time). Consider, for example, a world with extreme proportions of horizontal gene transfer (as seen in prokaryotes and, rarely, in eukaryotic cells).
      • Phylogenetic Branching: The theory is flexible enough to account for virtually any observed phylogenetic branching pattern. If, for instance, humans were found to be more genetically aligned with pigs than with chimpanzees, evolutionary theory would simply construct a different tree and provide a new narrative of common ancestry. This flexibility undercuts any measure of predictability claimed for the theory.
      • “Noise” in Data: If genetic data were truly “noise” (random and unpatterned), evolutionary theory could still rationalize this by asserting that “no creator would design that way, and randomness fully accounts for it,” thus always providing an explanation regardless of the pattern. In fact, a noise pattern is perhaps one of the few patterns better explained by random physical processes. Why would a designer, who has intentionality, create in such a slapdash way?
      • Convergence vs. Divergence: The theory’s ability to explain both convergent evolution (morphological similarity without close genetic relatedness) and divergent evolution (genetic differences leading to distinct forms) should immediately raise red flags, as this is a telltale sign of post-hoc fitting of observations rather than a result of specific prediction.
        • To illustrate this point, let’s imagine we have seven distinct traits (A, B, C, D, E, F, G) and five hypothetical populations of creatures (P1-P5), each possessing a unique combination of these traits. For example, P1 has {A, B, C}, P2 has {A, D, E}, P3 has {A, F, G}, P4 has {B, D, F}, and P5 has {E, G}. When examining this distribution, we can construct a plausible “evolutionary story.” Trait ‘A’, present in P1, P2, and P3, could be identified as a broadly ancestral trait. P1 might be an early branch retaining traits B and C, while P2 and P3 diversified by gaining D/E and F/G respectively.
        • However, the pattern becomes more complex with populations like P4 and P5. P4’s mix of traits {B, D, F} suggests it shares B with P1, D with P2, and F with P3. An evolutionary narrative would then employ concepts like trait loss (e.g., B being lost in P2/P3/P5’s lineage), convergent evolution (e.g., F evolving independently in P4 and P3), or complex branching patterns. Similarly, P5’s {E, G} would be explained by inheriting E from P2 and G from P3, while also undergoing significant trait loss (A, B, C, D, F).
        • And this is the crux of the argument: given any observed distribution of traits, evolutionary theory’s flexible set of explanatory mechanisms—including common ancestry, trait gain, trait loss, and convergence—can always construct a coherent historical narrative. This ability to fit diverse patterns post-hoc renders the mere existence of a nested hierarchy, disconnected from specific underlying molecular details, agnostic evidence for common descent over other models like common design (the short sketch following this list makes the same point concrete).
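
    The following sketch is purely illustrative, using only the toy trait table above. It scores two arbitrarily chosen tree topologies with a simple Fitch-style parsimony count; either topology can be narrated coherently, the only difference being how many gain/loss events the story must invoke:

        POPULATIONS = {
            "P1": {"A", "B", "C"},
            "P2": {"A", "D", "E"},
            "P3": {"A", "F", "G"},
            "P4": {"B", "D", "F"},
            "P5": {"E", "G"},
        }
        TRAITS = sorted(set().union(*POPULATIONS.values()))

        # Two arbitrary topologies, written as nested pairs of population names.
        TREE_1 = (("P1", ("P2", "P5")), ("P3", "P4"))
        TREE_2 = ((("P1", "P4"), "P2"), ("P3", "P5"))

        def fitch_changes(tree, trait):
            """Minimum number of gains/losses of `trait` on `tree` (Fitch parsimony)."""
            def walk(node):
                if isinstance(node, str):                 # leaf: 1 if trait present, else 0
                    return {1 if trait in POPULATIONS[node] else 0}, 0
                left_states, left_cost = walk(node[0])
                right_states, right_cost = walk(node[1])
                shared = left_states & right_states
                if shared:                                # children agree: no extra event
                    return shared, left_cost + right_cost
                return left_states | right_states, left_cost + right_cost + 1
            return walk(tree)[1]

        for name, tree in (("tree 1", TREE_1), ("tree 2", TREE_2)):
            events = sum(fitch_changes(tree, t) for t in TRAITS)
            print(f"{name}: {events} gain/loss events needed to fit all seven traits")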

    IV. Challenges to Specific Evolutionary Explanations and Assumptions

    1. Conservation of the Genetic Code:
      • The claim that the genetic code must remain highly conserved post-LUCA due to “catastrophic fitness consequences” of change is an unsubstantiated assumption. Granted, it could be true, but one can imagine plausible scenarios which could demonstrate exceptions.
      • Further, evolutionary theory already postulates radical changes, including the very emergence of complex systems “from scratch” during abiogenesis. If such fundamental transformations are possible, then the notion that a “new style of codon” is impossible over billions of years, even via incremental “patches and updates,” appears inconsistent.
      • Laboratory experiments that successfully engineer organisms to incorporate unnatural amino acids demonstrate the inherent malleability of the genetic code. Yet no experiment has demonstrated abiogenesis, a far more implausible event with less evolutionary time to play with. Why arbitrarily limit which improbable events are permissible?
      • There is no inherent evolutionary reason to expect a single, highly conserved “language” for the genetic code; if information can be created through evolutionary processes, then multiple distinct solutions should be the rule.
    2. Functionality of “Junk” DNA and Shared Imperfections:
      • The assertion that elements like pseudogenes and endogenous retroviruses (ERVs) are “non-functional” or “mistakes” is often an “argument from ignorance” or an “anti-God/atheism-of-the-gaps” fallacy. Much of the genome’s function is still unknown, and many supposedly “non-functional” elements are increasingly found to have regulatory or other biological roles. For instance, see my last article on the DDX11L2 “pseudo” gene, which operates as a regulatory element, including as an alternative promoter.
      • If these elements are functional, their homologous locations are easily explained by a common design model, where a designer reuses functional components across different creations.
      • The “functionality” of ERVs, for instance, is often downplayed in arguments for common descent, despite their known roles in embryonic development, antiviral defense, and regulation, thereby subtly shifting the goalposts of the argument.
    3. Probabilities of Gene Duplication and Fusion:
      • The probability assigned to beneficial gene duplications and fusions (which are crucial for creating new genetic information and structures) seems inconsistently high when compared to the low probability assigned to the evolution of new codon styles. If random copying errors can create functional whole genes or fusions, then the “impossibility” of a new codon style seems a little arbitrary.

    Conclusion:

    The overarching argument is that while common descent can certainly explain the observed patterns in biology, its explanatory power often relies on post-hoc rationalization and a flexibility that allows it to account for almost any outcome. This diminishes the distinctiveness and predictive strength of the evidence, leaving it ultimately agnostic when compared to alternative models that can also account for the same observations through different underlying mechanisms.

  • Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Abstract

    The human genome contains numerous regulatory elements that control gene expression, including canonical and alternative promoters. While DDX11L2 is annotated as a pseudogene, its functional relevance in gene regulation has been a subject of interest. This study leverages publicly available genomic data from the UCSC Genome Browser, integrating information from the ENCODE project and ReMap database, to investigate the transcriptional activity within a specific intronic region of the DDX11L2 gene (chr2:113599028-113603778, hg38 assembly). Our analysis reveals the co-localization of key epigenetic marks, candidate cis-regulatory elements (cCREs), and RNA Polymerase II binding, providing robust evidence for an active alternative promoter within this region. These findings underscore the complex regulatory landscape of the human genome, even within annotated pseudogenes.

    1. Introduction

    Gene expression is a tightly regulated process essential for cellular function, development, and disease. A critical step in gene expression is transcription initiation, primarily mediated by RNA Polymerase II (Pol II) in eukaryotes. Transcription initiation typically occurs at promoter regions, which are DNA sequences located upstream of a gene’s coding sequence. However, a growing body of evidence indicates the widespread use of alternative promoters, which can initiate transcription from different genomic locations within or outside of a gene’s canonical promoter, leading to diverse transcript isoforms and complex regulatory patterns [1].

    The DDX11L2 gene, located on human chromosome 2, is annotated as a DEAD/H-box helicase 11 like 2 pseudogene. Pseudogenes are generally considered non-functional copies of protein-coding genes that have accumulated mutations preventing their translation into functional proteins. Despite this annotation, some pseudogenes have been found to play active regulatory roles, for instance, by producing non-coding RNAs or acting as cis-regulatory elements [2]. Previous research has suggested the presence of an active promoter within an intronic region of DDX11L2, often discussed in the context of human chromosome evolution [3].

    This study aims to independently verify the transcriptional activity of this specific intronic region of DDX11L2 by analyzing comprehensive genomic and epigenomic datasets available through the UCSC Genome Browser. We specifically investigate the presence of key epigenetic hallmarks of active promoters, the classification of cis-regulatory elements, and direct evidence of RNA Polymerase II binding.

    2. Materials and Methods

    2.1 Data Sources

    Genomic and epigenomic data were accessed and visualized using the UCSC Genome Browser (genome.ucsc.edu), utilizing the Human Genome assembly hg38. The analysis focused on the genomic coordinates chr2:113599028-113603778, encompassing the DDX11L2 gene locus.

    The following data tracks were enabled and examined in detail:

    ENCODE Candidate cis-Regulatory Elements (cCREs): This track integrates data from multiple ENCODE assays to classify genomic regions based on their regulatory potential. The “full” display mode was selected to visualize the color-coded classifications (red for promoter-like, yellow for enhancer-like, blue for CTCF-bound) [4].

    Layered H3K27ac: This track displays ChIP-seq signal for Histone H3 Lysine 27 acetylation, a histone modification associated with active promoters and enhancers. The “full” display mode was used to visualize peak enrichment [5].

    ReMap Atlas of Regulatory Regions (RNA Polymerase II ChIP-seq): This track provides a meta-analysis of transcription factor binding sites from numerous ChIP-seq experiments. The “full” display mode was selected, and the sub-track specifically for “Pol2” (RNA Polymerase II) was enabled to visualize its binding profiles [6].

    DNase I Hypersensitivity Clusters: This track indicates regions of open chromatin, which are accessible to regulatory proteins. The “full” display mode was used to observe DNase I hypersensitive sites [4].

    GENCODE Genes and RefSeq Genes: These tracks were used to visualize the annotated gene structure of DDX11L2, including exons and introns.

    2.2 Data Analysis

    The analysis involved visual inspection of the co-localization of signals across the enabled tracks within the DDX11L2 gene region. Specific attention was paid to the first major intron, where previous studies have suggested an alternative promoter. The presence and overlap of red “Promoter-like” cCREs, H3K27ac peaks, and Pol2 binding peaks were assessed as indicators of active transcriptional initiation. The names associated with the cCREs (e.g., GSE# for GEO accession, transcription factor, and cell line) were noted to understand the experimental context of their classification.
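
    As a small illustration of the co-localization logic (not part of the original browser-based workflow), the same check can be expressed programmatically by treating each signal as a genomic interval and testing for pairwise overlap. The cCRE and H3K27ac coordinates below are those reported in the Results; the Pol2 interval is a placeholder, since the ReMap peak is described only as coinciding with the others:

        INTERVALS = {
            "promoter_like_cCRE": ("chr2", 113_601_200, 113_601_500),
            "H3K27ac_peak":       ("chr2", 113_601_200, 113_601_700),
            "Pol2_ChIPseq_peak":  ("chr2", 113_601_250, 113_601_650),  # placeholder extent
        }

        def overlap(a, b):
            """True if two (chrom, start, end) intervals share at least one base."""
            return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

        names = list(INTERVALS)
        for i, x in enumerate(names):
            for y in names[i + 1:]:
                print(f"{x} overlaps {y}: {overlap(INTERVALS[x], INTERVALS[y])}")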

    3. Results

    Analysis of the DDX11L2 gene locus on chr2 (hg38) revealed consistent evidence supporting the presence of an active alternative promoter within its first intron.

    3.1 Identification of Promoter-like cis-Regulatory Elements:

    The ENCODE cCREs track displayed multiple distinct red bars within the first major intron of DDX11L2, specifically localized around chr2:113,601,200 – 113,601,500. These red cCREs are computationally classified as “Promoter-like,” indicating a high likelihood of promoter activity based on integrated epigenomic data. Individual cCREs were associated with specific experimental identifiers, such as “GSE46237.TERF2.WI-38VA13,” “GSE102884.SMC3.HeLa-Kyoto_WAPL_PDS-depleted,” and “GSE102884.SMC3.HeLa-Kyoto_PDS5-depleted.” These labels indicate that the “promoter-like” classification for these regions was supported by ChIP-seq experiments targeting transcription factors like TERF2 and SMC3 in various cell lines (WI-38VA13, HeLa-Kyoto, and HeLa-Kyoto under specific depletion conditions).

    3.2 Enrichment of Active Promoter Histone Marks:

    A prominent peak of H3K27ac enrichment was observed in the Layered H3K27ac track. This peak directly overlapped with the cluster of red “Promoter-like” cCREs, spanning approximately chr2:113,601,200 – 113,601,700. This strong H3K27ac signal is a hallmark of active regulatory elements, including promoters.

    3.3 Direct RNA Polymerase II Binding:

    Crucially, the ReMap Atlas of Regulatory Regions track, specifically the sub-track for RNA Polymerase II (Pol2) ChIP-seq, exhibited a distinct peak that spatially coincided with both the H3K27ac enrichment and the “Promoter-like” cCREs in the DDX11L2 first intron. This direct binding of Pol2 is a definitive indicator of transcriptional machinery engagement at this site.

    3.4 Open Chromatin State:

    The presence of active histone marks and Pol2 binding strongly implies an open chromatin configuration. Examination of the DNase I Hypersensitivity Clusters track reveals a corresponding peak, further supporting the accessibility of this region for transcription factor binding and initiation.

    4. Discussion

    The integrated genomic data from the UCSC Genome Browser provides compelling evidence for an active alternative promoter within the first intron of the human DDX11L2 gene. The co-localization of “Promoter-like” cCREs, robust H3K27ac signals, and direct RNA Polymerase II binding collectively demonstrates that this region is actively engaged in transcriptional initiation.

    The classification of cCREs as “promoter-like” (red bars) is based on a sophisticated integration of multiple ENCODE assays, reflecting a comprehensive biochemical signature of active promoters. The specific experimental identifiers associated with these cCREs (e.g., ERG, TERF2, SMC3 ChIP-seq data) highlight the diverse array of transcription factors that can bind to and contribute to the regulatory activity of a promoter. While ERG, TERF2, and SMC3 are not RNA Pol II itself, their presence at this locus, in conjunction with Pol II binding and active histone marks, indicates a complex regulatory network orchestrating transcription from this alternative promoter.

    The strong H3K27ac peak serves as a critical epigenetic signature, reinforcing the active state of this promoter. H3K27ac marks regions of open chromatin that are poised for, or actively undergoing, transcription. Its direct overlap with Pol II binding further strengthens the assertion of active transcription initiation.

    The direct observation of RNA Polymerase II binding is the most definitive evidence for transcriptional initiation. Pol II is the core enzyme responsible for synthesizing messenger RNA (mRNA) and many non-coding RNAs. Its presence at a specific genomic location signifies that the cellular machinery for transcription is assembled and active at that site.

    The findings are particularly interesting given that DDX11L2 is annotated as a pseudogene. This study adds to the growing body of literature demonstrating that pseudogenes, traditionally considered genomic “fossils,” can acquire or retain functional regulatory roles, including acting as active promoters for non-coding RNAs or influencing the expression of neighboring genes [2]. The presence of an active alternative promoter within DDX11L2 suggests a more intricate regulatory landscape than implied by its pseudogene annotation alone.

    5. Conclusion

    Through the integrated analysis of ENCODE and ReMap data on the UCSC Genome Browser, this study provides strong evidence that an intronic region within the human DDX11L2 gene functions as an active alternative promoter. The co-localization of “Promoter-like” cCREs, high H3K27ac enrichment, and direct RNA Polymerase II binding collectively confirms active transcriptional initiation at this locus. These findings contribute to our understanding of the complex regulatory architecture of the human genome and highlight the functional potential of regions, such as pseudogenes, that may have been previously overlooked.

    References

    [1] Carninci P. and Tagami H. (2014). The FANTOM5 project and its implications for mammalian biology. F1000Prime Reports, 6: 104.

    [2] Poliseno L. (2015). Pseudogenes: Architects of complexity in gene regulation. Current Opinion in Genetics & Development, 31: 79-84.

    [3] Tomkins J.P. (2013). Alleged Human Chromosome 2 “Fusion Site” Encodes an Active DNA Binding Domain Inside a Complex and Highly Expressed Gene—Negating Fusion. Answers Research Journal, 6: 367–375. (Note: While this paper was a starting point, the current analysis uses independent data for verification).

    [4] ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome. Nature, 489(7414): 57–74.

    [5] Rada-Iglesias A., et al. (2011). A unique chromatin signature identifies active enhancers and genes in human embryonic stem cells. Nature Cell Biology, 13(9): 1003–1013.

    [6] Chèneby J., et al. (2018). ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments. Nucleic Acids Research, 46(D1): D267–D275.

  • Chromosome 2 Fusion: Evidence Out Of Thin Air?

    Chromosome 2 Fusion: Evidence Out Of Thin Air?

    The story is captivating and frequently told in biology textbooks and popular science: humans possess 46 chromosomes while our alleged closest relatives, chimpanzees and other great apes, have 48. The difference, evolutionists claim, is due to a dramatic event in our shared ancestry – the fusion of two smaller ape chromosomes to form the large human Chromosome 2. This “fusion hypothesis” is often presented as slam-dunk evidence for human evolution from ape-like ancestors. But when we move beyond the narrative and scrutinize the actual genetic data, does the evidence hold up? A closer look suggests the case for fusion is far from conclusive, perhaps even bordering on evidence conjured “out of thin air.”

    The fusion model makes specific predictions about what we should find at the junction point on Chromosome 2. If two chromosomes, capped by protective telomere sequences, fused end-to-end, we’d expect to see a characteristic signature: the telomere sequence from one chromosome (repeats of TTAGGG) joined head-to-head with the inverted telomere sequence from the other (repeats of CCCTAA). These telomeric repeats typically number in the thousands at chromosome ends.  

    The Missing Telomere Signature

    When scientists first looked at the proposed fusion region (locus 2q13), they did find some sequences resembling telomere repeats (IJdo et al., 1991). This was hailed as confirmation. However, the reality is much less convincing than proponents suggest.

    Instead of thousands of ordered repeats forming a clear TTAGGG…CCCTAA structure, the site contains only about 150 highly degraded, degenerate telomere-like sequences scattered within an ~800 base pair region. Searching a much larger 64,000 base pair region yields only 136 instances of the core TTAGGG hexamer, far short of a telomere’s structure. Crucially, the orientation is often wrong – TTAGGG motifs appear where CCCTAA should be, and vice-versa. This messy, sparse arrangement hardly resembles the robust structure expected from even an ancient, degraded fusion event.
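
    For readers who want to check this kind of tally themselves, a minimal sketch follows. The sequence string is only a stand-in so the script runs; the actual 2q13 sequence would need to be pasted in from the UCSC Genome Browser’s DNA view (or any other source of the region’s sequence):

        def count_telomere_motifs(seq):
            """Count forward (TTAGGG) and reverse-complement (CCCTAA) telomere hexamers."""
            seq = seq.upper()
            return {"TTAGGG": seq.count("TTAGGG"), "CCCTAA": seq.count("CCCTAA")}

        region = "TTAGGGTTAGGGCCCTAATTAGGGACGTCCCTAA"   # stand-in only, not real 2q13 data
        print(count_telomere_motifs(region))
        # In a clean head-to-head fusion, TTAGGG repeats should dominate one side of the
        # junction and CCCTAA the other; tallying both motifs on each side of the putative
        # fusion point is how the mixed, out-of-place orientation is assessed.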

    Furthermore, creationist biologist Dr. Jeffrey Tomkins discovered that this alleged fusion site is not merely inactive debris; it falls squarely within a functional region of the DDX11L2 gene, likely acting as a promoter or regulatory element (Tomkins, 2013). Why would a supposedly non-functional scar from an ancient fusion land precisely within, and potentially regulate, an active gene? This finding severely undermines the idea of it being simple evolutionary leftovers.

    The Phantom Centromere

    A standard chromosome has one centromere. Fusing two standard chromosomes would initially create a dicentric chromosome with two centromeres – a generally unstable configuration. The fusion hypothesis thus predicts that one of the original centromeres must have been inactivated, leaving behind a remnant or “cryptic” centromere on Chromosome 2.  

    Proponents point to alpha-satellite DNA sequences found around locus 2q21 as evidence for this inactivated centromere, citing studies like Avarello et al. (1992) and the chromosome sequencing paper by Hillier et al. (2005). But this evidence is weak. Alpha-satellite DNA is indeed common near centromeres, but it’s also found abundantly elsewhere throughout the genome, performing various functions.  

    The Avarello study, conducted before full genome sequencing, used methods that detected alpha-satellite DNA generally, not functional centromeres specifically. Their results were inconsistent, with the signal appearing in less than half the cells examined – hardly the signature of a definite structure. Hillier et al. simply noted the presence of alpha-satellite tracts, but these specific sequences are common types found on nearly every human chromosome and show no unique similarity or phylogenetic clustering with functional centromere sequences. There’s no compelling structural or epigenetic evidence marking this region as a bona fide inactivated centromere; it’s simply a region containing common repetitive DNA.

    Uniqueness and the Mutation Rate Fallacy

    Adding to the puzzle, the specific short sequence often pinpointed as the precise fusion point isn’t unique. As can be demonstrated using the BLAT tool, this exact sequence appears on human Chromosomes 7, 19, and the X and Y chromosomes. If this sequence is the unique hallmark of the fusion event, why does it appear elsewhere? The evolutionary suggestion that these might be remnants of other, even more ancient fusions is pure speculation without a shred of supporting evidence.

    The standard evolutionary counter-argument to the lack of clear telomere and centromere signatures is degradation over time. “The fusion happened millions of years ago,” the reasoning goes, “so mutations have scrambled the evidence.” However, this explanation crumbles under the weight of actual mutation rates.

    Using accepted human mutation rate estimates (Nachman & Crowell, 2000) and the supposed 6-million-year timeframe since divergence from chimps, the specific ~800 base pair fusion region would be expected to accumulate only a handful of point mutations over that entire period. The observed mutation rate is simply far too low to account for the dramatic degradation required to turn thousands of pristine telomere repeats and a functional centromere into the sparse, jumbled sequences we see today. Ironically, the known mutation rate argues against the degradation explanation needed to salvage the fusion hypothesis.
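
    A back-of-the-envelope version of that calculation, assuming Nachman and Crowell’s figure of roughly 2.5e-8 substitutions per site per generation and a 20-year generation time (the generation time is an illustrative assumption, not a number from the text):

        mu = 2.5e-8          # substitutions per site per generation (Nachman & Crowell, 2000)
        years = 6_000_000    # assumed time since the human/chimp split
        gen_time = 20        # assumed years per generation (illustrative)
        region_bp = 800      # size of the putative fusion region

        generations = years / gen_time
        expected = mu * generations * region_bp
        print(f"Expected substitutions in {region_bp} bp over {years:,} years: {expected:.1f}")
        # Roughly half a dozen substitutions: far too little change to scramble thousands
        # of ordered telomere repeats into the degraded pattern observed at 2q13.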

    Common Design vs. Common Ancestry

    What about the general similarity in gene order (synteny) between human Chromosome 2 and chimpanzee chromosomes 2A and 2B? While often presented as strong evidence for fusion, similarity does not automatically equate to ancestry. An intelligent designer reusing effective plans is an equally valid, if not better, explanation for such similarities. Moreover, the “near identical” claim is highly exaggerated; large and significant differences exist in gene content, control regions, and overall size, especially when non-coding DNA is considered (Tomkins, 2011, suggests overall similarity might be closer to 70%). This makes sense when one considers that coding regions provide the recipes for proteins, which organisms with similar needs will share to a similar extent.

    Conclusion: A Story Of Looking for Evidence

    When the genetic data for human Chromosome 2 is examined without a pre-commitment to an evolutionary narrative, the evidence for the fusion event appears remarkably weak. So much so that it begs the question: was this a mad dash to explain the blatant differences in the genomes of humans and chimps? The expected telomere signature is absent, replaced by a short, jumbled sequence residing within a functional gene region. The evidence for a second, inactivated centromere relies on the presence of common repetitive DNA lacking specific centromeric features. The supposed fusion sequence isn’t unique, and known mutation rates are woefully insufficient to explain the degradation required by the evolutionary model over millions of years.

    The chromosome 2 fusion story seems less like a conclusion drawn from compelling evidence and more like an interpretation imposed upon ambiguous data to fit a pre-existing belief in human-ape common ancestry. The scientific data simply does not support the narrative. Perhaps it’s time to acknowledge that the “evidence” for this iconic fusion event may indeed be derived largely “out of thin air.”

    References:

  • Examining Claims of Macroevolution and Irreducible Complexity:

    Examining Claims of Macroevolution and Irreducible Complexity:

    A Creationist Perspective

    The debate surrounding the origin and diversification of life continues, with proponents of neo-Darwinian evolution often citing observed instances of speciation and adaptations as evidence for macroevolution and the gradual development of complex biological systems. A recent “MEGA POST” on Reddit’s r/DebateEvolution presented several cases purported to demonstrate these processes, challenging the creationist understanding of life’s history. This article will examine these claims from a young-Earth creationist viewpoint.

    The original post defined key terms, stating, “Macroevolution ~ variations in heritable traits in populations with multiple species over time. Speciation marks the start of macroevolution.” However, creationists distinguish between microevolution – variation and speciation within a created kind – and macroevolution – the hypothetical transition between fundamentally different kinds of organisms. While the former is observable and acknowledged, the latter lacks empirical support and the necessary genetic mechanisms.

    Alleged Cases of Macroevolution:

    The post presented eleven cases as evidence of macroevolution.

    1. Lizards evolving placentas: The observation of reproductive isolation in Zootoca vivipara with different modes of reproduction was highlighted. The author noted, “(This is probably my favourite example of the bunch, as it shows a highly non-trivial trait emerging, together with isolation, speciation and selection for the new trait to boot.)” From a creationist perspective, the development of viviparity within lizards likely involves the expression or modification of pre-existing genetic information within the lizard kind. This adaptation and speciation do not necessitate the creation of novel genetic information required for a transition to a different kind of organism.

    2. Fruit flies feeding on apples: The divergence of the apple maggot fly (Rhagoletis pomonella) into host-specific groups was cited as sympatric speciation. This adaptation to different host plants and the resulting reproductive isolation are seen as microevolutionary changes within the fruit fly kind, utilizing the inherent genetic variability.  

    3. London Underground mosquito: The adaptation of Culex pipiens f. molestus to underground environments was presented as allopatric speciation. The observed physiological and behavioral differences, along with reproductive isolation, are consistent with diversification within the mosquito kind due to environmental pressures acting on the existing gene pool.  

    4. Multicellularity in Green Algae: The lab observation of obligate multicellularity in Chlamydomonas reinhardtii under predation pressure was noted. The author stated this lays “the groundwork for de novo multicellularity.” While this is an interesting example of adaptation, the transition from simple coloniality to complex, differentiated multicellularity, as seen in plants and animals, requires a significant increase in genetic information and novel developmental pathways. The presence of similar genes across different groups could point to a common designer employing similar modules for diverse functions.  

    5. Darwin’s Finches, revisited 150 years later: Speciation in the “Big Bird lineage” due to environmental pressures was discussed. This classic example of adaptation and speciation on the Galapagos Islands demonstrates microevolutionary changes within the finch kind, driven by natural selection acting on existing variations in beak morphology.  

    6 & 7. Salamanders and Greenish Warblers as ring species: These examples of geographic variation leading to reproductive isolation were presented as evidence of speciation. While ring species illustrate gradual divergence, the observed changes occur within the salamander and warbler kinds, respectively, and do not represent transitions to fundamentally different organisms.  

    8. Hybrid plants and polyploidy: The formation of Tragopogon miscellus through polyploidy was cited as rapid speciation. The author noted that crossbreeding “exploits polyploidy…to enhance susceptibility to selection for desired traits.” Polyploidy involves the duplication of existing chromosomes and the combination of genetic material from closely related species within the plant kingdom. This mechanism facilitates rapid diversification but does not generate the novel genetic information required for macroevolutionary transitions.  

    9. Crocodiles and chickens growing feathers: The manipulation of gene expression leading to feather development in these animals was discussed. The author suggested this shows “how birds are indeed dinosaurs and descend within Sauropsida.” Creationists interpret the shared genetic toolkit and potential for feather development within reptiles and birds as evidence of a common design within a broader created kind, rather than a direct evolutionary descent in the Darwinian sense.  

    10. Endosymbiosis in an amoeba: The observation of a bacterium becoming endosymbiotic within an amoeba was presented as analogous to the origin of organelles. Creationists propose that organelles were created in situ with their host cells, designed for symbiotic relationships from the beginning. The observed integration is seen as a function of this initial design.

    11. Eurasian Blackcap: The divergence in migratory behavior and morphology leading towards speciation was highlighted. This represents microevolutionary adaptation within the bird kind in response to environmental changes.

    Addressing “Irreducible Complexity”:

    The original post also addressed the concept of irreducible complexity with five counter-examples.

    1. E. Coli Citrate Metabolism in the LTEE: The evolution of citrate metabolism was presented as a refutation of irreducible complexity. The author noted that this involved “gene duplication, and the duplicate was inserted downstream of an aerobically-active promoter.” While this demonstrates the emergence of a new function, it occurred within the bacterial kind and involved the modification and duplication of existing genetic material. There is, therefore, no evidence here to suggest an evolutionary pathway for the origin of citrate metabolism itself.

    2. Tetherin antagonism in HIV groups M and O: The different evolutionary pathways for overcoming tetherin resistance were discussed. Viruses, with their rapid mutation rates and unique genetic mechanisms, present a different case study than complex cellular organisms. This is not analogous in the slightest.

    3. Human lactose tolerance: The evolution of lactase persistence was presented as a change that is “not a loss of regulation or function.” This involves a regulatory mutation affecting the expression of an existing gene within the human genome. Therefore, it’s not a gain either. This is just a semantic game.

    4. Re-evolution of bacterial flagella: The substitution of a key regulatory protein for flagellum synthesis was cited. The author noted this is “an incredibly reliable two-step process.” While this demonstrates the adaptability of bacterial systems, the flagellum itself remains a complex structure with numerous interacting components, and this two-step substitution does not show how those components cumulatively gained their necessary functions in the first place.

    5. Ecological succession: The development of interdependent ecosystems was presented as a challenge to irreducible complexity. However, ecological succession describes the interactions and development of communities of existing organisms, not the origin of the complex biological systems within those organisms.  

    Conclusion:

    While the presented cases offer compelling examples of adaptation and speciation, we interpret these observations as occurring within the boundaries of created kinds, utilizing the inherent genetic variability designed within them. These examples do not provide conclusive evidence for macroevolution – the transition between fundamentally different kinds of organisms – nor do they definitively refute the concept of irreducible complexity in the origin of certain biological systems. The fact that so many of these are, if not neutral, loss-of-function or loss-of-information mutations creates a compelling case for creation as the inference to the best explanation. The creationist model, grounded in the historical robustness of the Biblical account and supported by scientific evidence (multiple cross-disciplinary lines), offers a coherent alternative explanation for the diversity and complexity of life. As the original post concluded,

    “if your only response to the cases of macroevolution are ‘it’s still a lizard’, ‘it’s still a fly you idiot’ etc, congrats, you have 1) sorely missed the point and 2) become an evolutionist now!”

    However, the point is not that change doesn’t occur (we expect change on our model), but rather the kind and extent of that change, which, from a creationist perspective, remains within the divinely established boundaries described by the creation model and contradicts the expectations of universal common descent.

    References:

    Teixeira, F., et al. (2017). The evolution of reproductive isolation during a rapid adaptive radiation in alpine lizards. Proceedings of the National Academy of Sciences, 114(12), E2386-E2393. https://doi.org/10.1073/pnas.1635049100

    Fonseca, D. M., et al. (2023). Rapid Speciation of the London Underground Mosquito Culex pipiens molestus. ResearchGate. https://doi.org/10.13140/RG.2.2.23813.22247

    Grant, P. R., & Grant, B. R. (2017). Texas A&M professor’s study of Darwin’s finches reveals species can evolve in two generations. Texas A&M Today. https://stories.tamu.edu/news/2017/12/01/texas-am-professors-study-of-darwins-finches-reveals-species-can-evolve-in-two-generations/

    Feder, J. L., et al. (1997). Allopatric host race formation in sympatric hawthorn maggot flies. Proceedings of the National Academy of Sciences, 94(15), 7761-7766. https://doi.org/10.1073/pnas.94.15.7761

    Tishkoff, S. A., et al. (2013). Convergent adaptation of human lactase persistence in Africa and Europe. Nature Genetics, 45(3), 233-240. https://doi.org/10.1038/ng.2529

  • Tiny Water Fleas, Big Questions About Evolution

    Tiny Water Fleas, Big Questions About Evolution

    Scientists recently spent a decade tracking the genetics of a tiny water creature called Daphnia pulex, a type of water flea. What they found is stirring up a lot of questions about how evolution really works.  

    Imagine you’re watching a group of people over ten years, noting every little change in their appearance. Now, imagine doing that with the genetic code of hundreds of water fleas. That’s essentially what these researchers did. They looked at how the frequencies of different versions of genes (alleles) changed from year to year.

    What they discovered was surprising. On average, most of the genetic variations they tracked didn’t seem to be under strong selection at all. In other words, most of the time, the different versions of genes were more or less equally successful. It’s like watching people over ten years and finding that, on average, nobody’s hair color really changed much.

    However, there was a catch. Even though the average trend was “no change,” there were a lot of ups and downs from year to year. One year, a particular gene version might be slightly more common, and the next year, it might be slightly less common. This means that selective pressures—the forces that push evolution—were constantly changing.

    Think of it like the weather. One day it’s sunny, the next it’s rainy, but the average temperature over the year might be pretty mild. The researchers called this “fluctuating selection.”
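
    To make the “weather” analogy concrete, here is a minimal simulation sketch in Python (my own illustration, not the study’s actual model or data): a single two-allele gene whose selection coefficient is redrawn each generation from a zero-mean distribution, so selection fluctuates from year to year while averaging out to roughly nothing.

        import random

        def fluctuating_selection(generations=10, p0=0.5, sigma=0.2, seed=1):
            """Track one allele's frequency when the selection coefficient
            flips sign from year to year but averages out near zero.
            Illustrative only; not the model used in the Daphnia study."""
            random.seed(seed)
            p = p0
            trajectory = [p]
            for _ in range(generations):
                s = random.gauss(0.0, sigma)    # this year's selection coefficient
                w_bar = p * (1 + s) + (1 - p)   # mean fitness of the population
                p = p * (1 + s) / w_bar         # standard haploid selection update
                trajectory.append(p)
            return trajectory

        print([round(p, 3) for p in fluctuating_selection()])

    Run it and the frequency wanders up and down from year to year, yet no allele is consistently favored, which is the same pattern of lots of short-term movement with little long-term direction that the researchers describe.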

    They also found that these genetic changes weren’t happening randomly across the whole genome. Instead, they were happening in small, linked groups of genes. These groups seemed to be working together, like little teams within the genome.  

    So, what does this all mean?

    Well, for one thing, it challenges the traditional picture of gradual, steady evolution via natural selection. If evolution were a slow, constant march forward, you’d expect the environment to push gene frequencies consistently in one direction over time. But that’s not what they found. Instead, they saw a lot of back-and-forth, with selection pressures constantly changing and largely canceling out to a net-zero effect.

    From a design perspective, this makes a lot of sense. Instead of random changes slowly building up over millions of years, this data suggests that organisms are incredibly adaptable, designed to handle constant environmental shifts. The “teams” of linked genes working together look a lot like pre-programmed modules, ready to respond to whatever challenges the environment throws their way.

    The fact that most gene variations are “quasi-neutral,” meaning they don’t really affect survival on average, also fits with the idea of a stable, created genome. Rather than constantly evolving new features, organisms might be designed with a wide range of genetic options, ready to be used when needed.

    This study on tiny water fleas is a reminder that evolution is a lot more complex than we often think. It’s not just about random mutations and gradual changes. It’s about adaptability, flexibility, and a genome that’s ready for anything. And maybe, just maybe, it’s about design.

    (Based on: The genome-wide signature of short-term temporal selection)

  • The Limits of Evolution

    The Limits of Evolution

    Yesterday, Dr. Rob Stadler gave a presentation on Dr. James Tour’s YouTube channel that has brought to light a compelling debate about the true extent of evolutionary capabilities. In their conversation, they delve into the levels of confidence in evolutionary evidence, revealing a stark contrast between observable, high-confidence microevolution and the extrapolated, low-confidence claims of macroevolutionary transitions. This distinction, based on the levels of evidence as understood in medical science, raises profound questions about the sufficiency of evolutionary mechanisms to explain the vast diversity of life.

    Dr. Stadler, author of “The Scientific Approach to Evolution,” presents a rigorous framework for evaluating scientific evidence. He outlines six criteria for high-confidence results: repeatability, direct measurability, prospectiveness, unbiasedness, assumption-free methodology, and reasonable claims. Applying these criteria to common evolutionary arguments, such as the fossil record, geographic distribution, vestigial organs, and comparative anatomy, Dr. Stadler reveals significant shortcomings. These lines of evidence, he argues, fall short of the high-confidence threshold: they are not repeatable, they cannot be directly measured, they offer little (if any) predictive value, and, most importantly, they rely heavily on biased interpretation and assumption.

    However, the interview also highlights examples of high-confidence evolutionary studies. Experiments with E. coli bacteria, for instance, demonstrate the power of natural selection and mutation to drive small-scale changes within a population. These studies, repeatable and directly measurable, provide compelling evidence for microevolution. Yet, as Dr. Stadler emphasizes, extrapolating these observed changes to explain the origin of complex biological systems or the vast diversity of life is a leap of faith, not a scientific conclusion.

    The genetic differences between humans and chimpanzees further illustrate this point. While popular science often cites a 98% similarity, Dr. Stadler points out the significant differences, particularly in “orphan genes” and the regulatory functions of non-protein-coding DNA. These differences, he argues, challenge the notion of a simple, linear evolutionary progression.
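
    As a rough sense of scale (a back-of-envelope calculation of my own, not a figure quoted in the interview), even the popular ~98% identity over aligned regions implies tens of millions of single-base differences before indels, copy-number differences, and orphan genes are counted:

        # Illustrative assumptions: ~3.1 billion base pairs in the human genome,
        # ~98% identity over aligned regions (the popular-science figure).
        genome_size_bp = 3_100_000_000
        identity = 0.98

        single_base_differences = genome_size_bp * (1 - identity)
        print(f"~{single_base_differences / 1e6:.0f} million single-base differences")
        # => roughly 62 million, before structural differences are even considered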

    This aligns with the research of Dr. Douglas Axe, whose early work explored the probability of protein evolution. Axe’s findings suggest that the vast divergence between protein structures makes a common ancestor for all proteins highly improbable (Axe, 2000). This raises critical questions about the likelihood of orphan genes arising through random evolutionary processes alone, given the complexity and specificity of protein function.
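
    To give a feel for the combinatorics behind such improbability arguments, here is a rough illustrative sketch; the 150-residue protein length and the 10^40 “trial budget” are assumptions chosen for illustration, not figures taken from Axe (2000):

        import math

        # Sequence space for a protein of length L built from the 20 standard amino acids.
        L = 150                            # illustrative length, not a figure from Axe (2000)
        log10_space = L * math.log10(20)
        print(f"20^{L} is about 10^{log10_space:.0f} possible sequences")   # ~10^195

        # Even an enormously generous number of evolutionary "trials" samples
        # only a vanishing fraction of that space (10^40 is purely illustrative).
        log10_trials = 40
        print(f"fraction sampled is about 10^{log10_trials - log10_space:.0f}")  # ~10^-155

    The open question such arguments turn on is how large the functional fraction of that space actually is.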

    The core argument, as presented by Dr. Tour and Dr. Stadler, is not that evolution is entirely false. Rather, they contend that the high-confidence evidence supports only limited, small-scale changes, or microevolution. The leap to macroevolution, the idea that these small changes can accumulate to produce entirely new biological forms, appears to be a category error, based on our best evidence, and remains a low-confidence extrapolation.

    The video effectively presents case studies of evolution, demonstrating the observed limitations of evolutionary change. This evidence strongly suggests that evolutionary mechanisms are insufficient to account for the levels of diversity we observe today. The complexity of biological systems, the vast genetic differences between species, and the improbability of protein evolution challenge the core tenets of Neo-Darwinism and the Modern Synthesis.

    As Dr. Tour and Dr. Stadler articulate, a clear distinction must be made between observable, repeatable microevolution and the extrapolated, assumption-laden claims of macroevolution. While the former is supported by high-confidence evidence, the latter remains a subject of intense debate, demanding further scientific scrutiny.

    Works Cited

    • Tour, James, and Rob Stadler. “Evolution vs. Evidence: Are We Really 98% Chimp?” YouTube, uploaded by James Tour, https://www.youtube.com/watch?v=smTbYKJcnj8&t=2117s.
    • Axe, Douglas D. “Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors.” Journal of Molecular Biology, vol. 301, no. 3, 2000, pp. 585-595.