Creation Questions

  • Introduction To Created Heterozygosity

    Introduction

    Evolution by natural selection is a foundational theory in biology, observable in bacteria developing resistance, finch beak size changes, and populations adapting to environments. These microevolution examples are experimentally verified and widely accepted.

    A deeper question persists: Are the mechanisms of random mutation and natural selection sufficient to explain not only the modification of existing biological structures, but also their original creation? Specifically, can the processes observed in generating variation within species account for the origin of entirely novel protein folds, enzymatic functions, and the fundamental molecular machinery of life?

    This essay addresses this question by systematically evaluating the proposed mechanisms for evolutionary innovation, identifying their constraints, and highlighting what appears to be a fundamental limit: the origin of complex protein architecture.

    Part I: The Mechanisms of Modification

    Gene Duplication: Copy, Paste, Edit

    The most commonly cited mechanism for evolutionary innovation is gene duplication. The logic is straightforward: when a gene is accidentally copied during DNA replication, the organism now has two versions. One copy maintains the original function (keeping the organism alive), while the redundant copy is “free” to mutate without immediate lethal consequences.

    In theory, this freed copy can acquire new functions through random mutation—a process called neofunctionalization. Over time, what was once a single-function gene becomes a gene family with diverse, related functions.

    This mechanism is real and well-documented. For instance, in “trio” studies (father, mother, child), we regularly see de novo copy number variations (CNVs). Based on this, we can trace gene families back through evolutionary history and see convincing evidence of duplication events. However, gene duplication has important limitations:

    Dosage sensitivity: Cells operate as finely tuned chemical systems. Doubling the amount of a protein often disrupts this balance, creating harmful or even lethal effects. The cell isn’t simply tolerant of extra copies—duplication frequently imposes an immediate cost.

    Subfunctionalization: Rather than one copy evolving a bold new function, duplicate genes more commonly undergo subfunctionalization—they degrade slightly and split the original function between them. What was once done by one gene is now accomplished by two, each doing part of the job. This adds genomic complexity but doesn’t create novel capabilities.

    The prerequisite problem: Most fundamentally, gene duplication requires a functional gene to already exist. It’s a “copy-paste-edit” mechanism. It can explain variations on a theme—how you get a family of related enzymes—but it cannot explain the origin of the first member of that family.

    Evo-Devo: Rewiring the Switches

    Evolutionary developmental biology (evo-devo) revealed something crucial: many major morphological changes don’t come from inventing new genes, but from rewiring when and where existing genes are expressed. Mutations in regulatory elements—the “switches” that control genes—can produce dramatic changes in body plans.

    A classic example: the difference between a snake and a lizard isn’t that snakes invented fundamentally new genes. Rather, mutations in regulatory regions altered the expression patterns of Hox genes (master developmental regulators), eliminating limb development while extending the body axis.

    This mechanism helps explain how evolution can produce dramatic morphological diversity without constantly inventing new molecular parts. But it has clear boundaries:

    The circuitry prerequisite: Regulatory evolution presupposes the existence of a sophisticated, modular regulatory network—the Hox genes themselves, enhancer elements, transcription factor binding sites. This network is enormously complex. Evo-devo explains how to rearrange the blueprint, but not where the drafting tools came from.

    Modification, not creation: You can turn genes on in new places, at new times, in new combinations. You can lose structures (snakes losing legs). But you cannot regulatory-mutate your way to a structure whose genetic basis doesn’t already exist. You’re rearranging existing parts, not forging new ones.

    Exaptation: Shifting Purposes

    Exaptation describes how traits evolved for one function can be co-opted for another. Feathers, possibly first used for insulation or display, were later recruited for flight. Swim bladders in fish became lungs in land vertebrates.

    This is an important concept for understanding evolutionary pathways—it explains how structures can be preserved and refined even when their ultimate function hasn’t yet emerged. But exaptation is a description of changing selective pressures, not a mechanism of generation. It tells us how a trait might survive intermediate stages, but not how the physical structure arose in the first place.

    Part II: The Hard Problem—De Novo Origins

    The mechanisms above all share a common feature: they are remixing engines. They shuffle, duplicate, rewire, and repurpose existing genetic material. This works brilliantly for generating diversity and adaptation. But it raises an unavoidable question: Where did the original material come from?

    This is where the inquiry becomes more challenging.

    De Novo Gene Birth: From Junk to Function?

    To tackle this question, we examine the hypothesis that new genes can arise from previously non-coding “junk” DNA—an idea central to de novo gene birth.

    One hypothesis is that non-coding DNA—sometimes called “junk DNA”—occasionally gets transcribed randomly. If a random mutation creates an open reading frame (a start codon, some codons, a stop codon), you might produce a random peptide. Perhaps, very rarely, this random peptide does something useful, and natural selection preserves and refines it.

    This mechanism has some support. We do see “orphan genes” in various lineages—genes with no clear homologs in related species, suggesting recent origin. When we examine these orphan genes, many are indeed simple: short, intrinsically disordered proteins with low expression levels.

    But here’s where we hit the toxicity filter—a fundamental physical constraint.

    The Toxicity Filter

    Protein synthesis is energetically expensive, consuming up to 75% of a growing cell’s energy budget. When a cell produces a protein, it’s making an investment. If that protein immediately misfolds and gets degraded by the proteasome, the cell has just run a futile cycle—burning energy to produce garbage.

    In a competitive environment (which is where natural selection operates), a cell wasting energy on useless proteins will be outcompeted by leaner, more efficient cells. This creates strong selection pressure against expressing random, non-functional sequences.

    It gets worse. Cells have a limited capacity for handling misfolded proteins. Chaperone proteins help fold new proteins correctly, and the proteasome system degrades those that fail. But these are finite resources. If a cell produces too many difficult-to-fold or misfolded proteins, it triggers the Unfolded Protein Response (UPR).

    The UPR is an emergency protocol. Initially, the cell tries to fix the problem—producing more chaperones, slowing translation. But if the stress is too severe, the UPR switches from “repair” to “abort”: the cell undergoes apoptosis (programmed cell death) to protect the organism.

    This creates a severe constraint: natural selection doesn’t just fail to reward complex random sequences—it actively punishes them. The toxicity filter eliminates complex precursors before they have a chance to be refined.

    The Result

    The “reservoir” of potentially viable de novo genes is therefore biased heavily toward simple, disordered, low-expression peptides. These can slip through because they don’t trigger the toxicity filters. They don’t misfold (because they don’t fold), and at low expression, they don’t drain significant resources.

    This explains the orphan genes we observe: simple, disordered, regulatory, or binding proteins. But it fails to explain the origin of complex, enzymatic machinery—proteins that require specific three-dimensional structures to catalyze reactions.

    Part III: The Valley of Death

    To understand why complex enzymatic proteins are so difficult to generate de novo, we need to examine what makes them different from simple disordered proteins.

    Two Types of Proteins

    Intrinsically Disordered Proteins (IDPs) are floppy, flexible chains. They’re rich in polar and charged amino acids (hydrophilic—“water-loving”). These amino acids are happy interacting with water, so the protein doesn’t collapse into a compact structure. IDPs are excellent for binding to other molecules (they can wrap around things) and for regulatory functions (they’re flexible switches). They’re also relatively safe—they don’t aggregate easily.

    Folded Proteins, by contrast, have a hydrophobic core. Water-hating amino acids cluster in the center of the protein, away from the surrounding water. This hydrophobic collapse creates a stable, specific three-dimensional structure. Folded proteins can do things IDPs cannot: precise catalysis requires holding a substrate molecule in exactly the right geometry, which requires a rigid, well-defined active site pocket.

    The problem is that the “recipe” for these two types of proteins is fundamentally different. You can’t gradually transition from one to the other without passing through a dangerous intermediate state.

    The Sticky Globule Problem

    Imagine trying to evolve from a safe IDP to a functional folded enzyme:

    1. Start: A disordered protein—polar amino acids, floppy, safe.
    2. Intermediate: As you mutate polar residues to hydrophobic ones, you don’t immediately get a nice folded structure. Instead, you get a partially hydrophobic chain—the worst of both worlds. These “sticky globules” are aggregation-prone. They clump together like glue, forming toxic aggregates.
    3. End: A properly folded protein with a hydrophobic core and stable structure.

    The middle step—the sticky globule phase—is precisely what the toxicity filter eliminates most aggressively. These partially hydrophobic intermediates are the most dangerous type of protein for a cell.

    This creates what we might call the Valley of Death: a region of sequence space that is selected against so strongly that random mutation cannot cross it. To get from a safe disordered protein to a functional enzyme, you’d need to traverse this valley—but natural selection is actively pushing you back.

    Catalysis Requires Geometry

    There’s a second constraint. Catalysis—the acceleration of chemical reactions—almost always requires a precise three-dimensional pocket (an active site) that can:

    • Position the substrate molecule correctly.
    • Stabilize the transition state.
    • Shield the reaction from water (in many cases).

    A floppy disordered protein is excellent for binding (it can wrap around things), but terrible for catalysis. It lacks the rigid geometry needed to precisely orient molecules and stabilize reaction intermediates.

    This means the “functional gradient” isn’t smooth. You can evolve binding functions with IDPs. You can evolve regulatory functions. But to evolve enzymatic function, you need to cross the valley—and the valley actively resists crossing.

    Part IV: The Escape Route—And Its Implications

    There is one clear escape route from the Valley of Death: don’t cross it at all.

    Divergence from Existing Folds

    If you already have a stable folded protein—one with a hydrophobic core and a defined structure—you can modify it safely:

    1. Duplicate it: Now you have a redundant copy.
    2. Keep the core: The hydrophobic core (the “dangerous” part) stays conserved. This maintains structural stability.
    3. Mutate the surface: The active site is usually on flexible loops outside the core. Mutate these loops to change substrate specificity, reaction type, or regulation.

    This mechanism is well-documented. It’s how modern enzyme families diversify. You get proteins that are functionally very different (digesting different substrates, catalyzing different reactions) but structurally similar—variations on the same fold.

    Critically, you never cross the Valley of Death because you never dismantle the scaffold. You’re modifying an existing, stable structure, not building one from scratch.

    The Primordial Set

    This escape route, however, comes with a profound implication: it presupposes the fold already exists.

    If modern enzymatic diversity arises primarily through divergence from existing folds rather than de novo generation of new folds, where did those original folds come from?

    The empirical data suggest a striking answer: they arose very early, and there hasn’t been much architectural innovation since.

    When we examine protein structures across all domains of life, we don’t see a continuous spectrum of novel shapes appearing over evolutionary time. Instead, we see roughly 1,000-10,000 basic structural scaffolds (fold families) that appear again and again. A bacterial enzyme and a human enzyme performing completely different functions often share the same underlying fold—the same basic architectural plan.

    Comparative genomics pushes this pattern even further back. The vast majority of these fold families appear to have been present in LUCA—the Last Universal Common Ancestor—over 3.5 billion years ago.

    The implication is stark: evolution seems to have experienced a “burst” of architectural invention right at the beginning, and has spent the subsequent 3+ billion years primarily as a remixer and optimizer, not an architect of fundamentally new structures.

    Part V: The Honest Reckoning

    We can now reassess the original question: Are the mechanisms of mutation and natural selection sufficient to explain not just the modification of life, but its origination?

    What the Mechanisms Can Do

    The neo-Darwinian synthesis is extraordinarily powerful for explaining:

    • Optimization: Taking an existing trait and refining it
    • Diversification: Creating variations on existing themes
    • Adaptation: Adjusting populations to new environments
    • Loss: Eliminating unnecessary structures
    • Regulatory rewiring: Changing when and where genes are expressed

    These mechanisms are observed, experimentally verified, and sufficient to explain the vast majority of biological diversity we see around us.

    What the Mechanisms Struggle With

    The same mechanisms face severe constraints when attempting to explain:

    • The origin of novel protein folds: The Valley of Death makes de novo generation of complex, folded, enzymatic proteins implausible under cellular conditions.
    • The origin of the primordial set: The fundamental protein architectures that all modern life relies on.
    • The origin of the cellular machinery: DNA replication, transcription, translation, and error-correction systems that evolution requires to function.

    A New Theory

    The constraints we’ve examined—the toxicity filter, the Valley of Death, the thermodynamics of protein folding—are not “research gaps” that might be closed with more data. They are physical constraints rooted in chemistry and bioenergetics.

    Modern evolutionary mechanisms are demonstrably excellent at working with existing complexity. They can shuffle it, optimize it, repurpose it, and elaborate on it in extraordinary ways. The diversity of life testifies to its power.

    But when we trace the mechanisms back to their foundation—when we ask how the original protein folds arose, how the first enzymatic machinery came to be—we encounter a genuine boundary.

    The thermodynamics that make de novo fold generation implausible today presumably existed 3.5 billion years ago as well. Perhaps early Earth conditions were radically different in ways that bypassed these constraints—different chemistry, mineral catalysts, an RNA world with different rules. Perhaps there are mechanisms we haven’t yet discovered or understood.

    But based on what we currently understand about the mechanisms of evolution and the physics of protein folding, the honest answer to “how did those original folds arise?” is:

    They didn’t.

    We need a new explanation that can account for the data. We have excellent, mechanistic explanations for how life diversifies and adapts. We have a clear understanding of the constraints that limit those mechanisms. And we have an unsolved problem at the foundation.

    The question remains open: not as a gap in data, but as a gap in mechanism. So what mechanism can account for genetic diversity?

    Part VI: A More Parsimonious Model

    For over a century, the primary explanation for the vast diversity of life on Earth has been the slow accumulation of mutations over millions of years, filtered by natural selection. However, there is another account of the origins of life that is often left unacknowledged and dismissed as pseudoscience. The concept is simple. We see information in the form of DNA, which is, by nature, a linguistic code. In our repeated and uniform experience, codes require minds. If our experience truly tells us that evolutionary mechanisms cannot account for information systems, as we’ve discovered through this inquiry, then it stands to reason that a design solution cannot rightly be said to be “off the table.”

    However, there are many forms of design, so which one fits the data?

    The answer lies in a powerful, testable model known as Created Heterozygosity and Natural Processes (CHNP). This model suggests that a designer created organisms not as genetically uniform clones, but with pre-existing genetic diversity “front-loaded” into their genomes.

    Here is why Created Heterozygosity makes scientific sense.

    A common objection to any form of young-age design model is that two people cannot produce the genetic variation seen in seven billion humans today. Critics argue that we would be clones. However, this objection assumes Adam and Eve were genetically homozygous (having two identical DNA copies).

    If Adam and Eve were created with heterozygosity—meaning their two sets of chromosomes contained different versions of genes (alleles)—they could possess a massive amount of potential variation.

    The Power of Recombination

    We observe in biology that parents pass on traits through recombination and gene conversion. These processes shuffle the DNA “deck” every generation. Even if Adam and Eve had only two sets of chromosomes each, the number of possible combinations they could produce is mind-boggling.
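
    To give a sense of the scale, here is a back-of-envelope sketch. It assumes 23 human chromosome pairs and counts independent assortment only; crossover and gene conversion, which the paragraph above also mentions, would push the numbers higher still.

    ```python
    # Back-of-envelope: gamete diversity from independent assortment alone.
    # Assumes 23 chromosome pairs; ignores crossover and gene conversion,
    # which would increase these counts further.

    CHROMOSOME_PAIRS = 23

    # Each parent can package either member of every pair into a gamete.
    gametes_per_parent = 2 ** CHROMOSOME_PAIRS          # ~8.4 million
    combinations_per_couple = gametes_per_parent ** 2   # ~7.0e13

    print(f"Distinct gametes per parent:  {gametes_per_parent:,}")
    print(f"Distinct offspring genotypes: {combinations_per_couple:,}")
    ```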

    If you define an allele by specific DNA positions rather than whole genes, two individuals can carry four unique sets of genomic information. Calculations show that this is sufficient to explain the vast majority of common genetic variants found in humans today without needing millions of years of mutation. In fact, most allelic diversity can be explained by only two “major” alleles.

    In short, the problem isn’t that two people can’t produce diversity; it’s that critics assume the starting pair had no diversity to begin with.

    Part VII: A Dilemma, a Ratchet, and Other Problems

    Before we go further in-depth in our explanation of CHNP, we must realize the scope of the problems with evolution. It is not just that the mechanisms are insufficient for creating novelty; that alone would be one thing. But there are insurmountable “gaps” everywhere you turn in the modern synthesis.

    The “Waiting Time” Problem

    The evolutionary model relies on random mutations to generate new genetic information. However, recent numerical simulations reveal a profound waiting time problem. Beneficial mutations are incredibly rare, and waiting for specific strings of nucleotides (genetic letters) to arise and be fixed in a population takes far too long.

    For example, establishing a specific string of just two new nucleotides in a hominin population would take an average of 84 million years. A string of five nucleotides would take 2 billion years. There simply isn’t enough time in the evolutionary timeline (e.g., 6 million years from a chimp-like ancestor to humans) to generate the necessary genetic information from scratch.

    Haldane’s Dilemma

    In 1957, the evolutionary geneticist J.B.S. Haldane calculated that natural selection is not free; it has a biological “cost”. For any specific genetic variant (mutation) to increase in a population, the individuals without that trait must effectively be removed from the gene pool—either by death or by failing to reproduce.

    This creates a dilemma for the evolutionary narrative:

    A population only has a limited surplus of offspring available to be “spent” on selection. If a species needs to select for too many traits at once, or eliminate too many mutations, the required death rate would exceed the reproductive rate, driving the species to extinction.

    Haldane calculated that for a species with a low reproductive rate like humans, the cost of fixing just one beneficial mutation would require roughly 300 generations. This speed is far too slow to explain the complexity of the human genome, even within the evolutionary timescale of millions of years.
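
    The 300-generation figure follows from simple arithmetic once Haldane’s inputs are assumed. Below is a minimal sketch, assuming his canonical cost of substitution of roughly 30 (in units of population size) and a species able to devote about 10% of its reproductive excess per generation to selective deaths; both inputs are taken from the standard presentation of the dilemma, not derived here.

    ```python
    # Haldane's back-of-envelope: generations needed to fix one substitution.
    # Assumed inputs (Haldane's canonical values, stated as assumptions):
    #   total_cost       ~30   -> cumulative selective deaths, in units of N
    #   selection_budget  0.10 -> fraction of the population expendable per generation

    total_cost = 30.0
    selection_budget = 0.10

    generations_per_substitution = total_cost / selection_budget
    print(f"~{generations_per_substitution:.0f} generations per fixed substitution")
    # At a human-like ~20 years per generation, that is ~6,000 years per substitution.
    ```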

    Rarity of Function

    From the perspective of Dr. Douglas Axe, a molecular biologist and Director of the Biologic Institute, there is a mathematically fatal challenge to the Darwinian narrative. His research focuses on the “rarity of function”—specifically, how difficult it is to find a functional protein sequence among all possible combinations of amino acids.

    Proteins are chains of amino acids that must fold into precise three-dimensional shapes to function. There are 20 different amino acids available for each position in the chain. If you have a modest protein that is 150 amino acids long, the number of possible arrangements is 20^150. This number is roughly 10^195. To put this in perspective, there are only about 10^80 atoms in the entire observable universe.
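
    The size of that search space is simple arithmetic to check; the short sketch below merely restates the figures in the paragraph above (20 amino acids, a 150-residue chain, and the commonly cited ~10^80 atoms in the observable universe).

    ```python
    import math

    # Number of possible 150-residue sequences built from 20 amino acids.
    AMINO_ACIDS = 20
    CHAIN_LENGTH = 150

    log10_sequences = CHAIN_LENGTH * math.log10(AMINO_ACIDS)  # ~195.2
    print(f"20^150 ~= 10^{log10_sequences:.0f} possible sequences")
    print("Atoms in the observable universe: ~10^80 (commonly cited estimate)")
    ```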

    The “search space” of possible combinations is unimaginably vast. Evolutionary theory assumes that “functional” sequences (those that fold and perform a task) are common enough that random mutations can stumble upon them. Dr. Axe tested this assumption experimentally using a 150-amino-acid domain of the beta-lactamase enzyme. In his seminal 2004 paper published in the Journal of Molecular Biology, Axe determined the ratio of functional sequences to non-functional ones.

    He calculated that the probability of a random sequence of 150 amino acids forming a stable, functional fold is approximately 1 in 10^77. This rarity is catastrophic for evolution. To find just one functional protein fold by chance would be like a blindfolded man trying to find a single marked atom in the entire Milky Way galaxy. Because functional proteins are so isolated in sequence space, natural selection cannot help “guide” the process.

    Natural selection only works after a function exists. It cannot select a protein that doesn’t work yet. Axe describes functional proteins as tiny, isolated islands in a vast sea of gibberish. This is precisely the Valley of Death we discussed earlier. You cannot “gradually” evolve from one island to another because the space between them is lethal (non-functional). Even if the entire Earth were covered in bacteria dividing rapidly for 4.5 billion years, the total number of mutational trials would be roughly 10^40. This is nowhere near the 10^77 trials needed to statistically guarantee finding a single new protein fold.
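
    Combining Axe’s rarity estimate with the trial count above is again simple arithmetic. The sketch below assumes the essay’s own figures (a functional fraction of 10^-77 and roughly 10^40 mutational trials) and computes the expected number of functional folds found by blind search.

    ```python
    # Expected number of functional folds found by blind search,
    # using the figures quoted in the text (assumptions, not new measurements):
    #   functional_fraction ~ 1e-77  (Axe 2004 estimate for a 150-aa fold)
    #   total_trials        ~ 1e40   (generous count of bacterial mutational trials)

    functional_fraction = 1e-77
    total_trials = 1e40

    expected_hits = total_trials * functional_fraction
    print(f"Expected functional folds found: {expected_hits:.0e}")  # ~1e-37
    ```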

    Muller’s Ratchet

    While Haldane highlighted the cost and Axe showed the scale, Muller showed the trajectory. Muller’s Ratchet describes the mechanism of irreversible decline. The genome is not a pool of independent genes; it is organized into “linkage blocks”—large chunks of DNA that are inherited together.

    Because beneficial mutations (if they occur) are physically linked to deleterious mutations on the same chromosome segment, natural selection cannot separate them. As deleterious mutations accumulate within these linkage blocks, the overall genetic quality of the block declines. Like a ratchet that only turns one way, the damage locks in. The “best” class of genomes in the population eventually carries more mutations than the “best” class of the previous generation. Over time, every linkage block in the human genome accumulates deleterious mutations faster than selection can remove them. There is no mechanism to reverse this damage, leading to a continuous, downward slide in genetic information.

    Genetic Entropy

    According to Dr. Sanford, these factors together create a lethal dilemma for the standard evolutionary model. The combination of high mutation rates, vast fitness landscapes, the high cost of selection, and physical linkage ensures that the human genome is rusting out like an old car, losing information with every generation.

    If humanity had been accumulating mutations for millions of years, our genome would have already reached “error catastrophe,” and we would be extinct. Alexey Kondrashov described this phenomenon in his paper, “Why Have We Not Died 100 Times Over?” The fact that we are still here suggests we have only been mutating for thousands, not millions, of years.

    The vast majority of mutations are harmful or “nearly neutral” (slightly harmful but invisible to natural selection). These mutations accumulate every generation. Human mutation rates indicate we are accumulating about 100 new mutations per person per generation. If humanity were hundreds of thousands of years old, we would have gone extinct from this genetic load.

    Created Heterozygosity aligns with this reality. It posits a perfect, highly diverse starting point that is slowly losing information over time, rather than a simple starting point struggling to build information against the tide of entropy. The observed degeneration is also consistent with the Biblical account of a perfect Creation that was subjected to corruption and decay following the Fall.

    Rapid Speciation

    Proponents of CHNP do not believe in the “fixity of species.” Instead, they observe that species change and diversify over time—often rapidly. This is called “cis-evolution” (diversification within a kind) rather than “trans-evolution” (changing from one kind to another).

    Speciation often occurs when a sub-population becomes isolated and loses some of its initial genetic diversity, shifting from a heterozygous state to a more homozygous state. This reveals specific traits (phenotypes) that were previously hidden (recessive). These changes will inevitably make two populations reproductively isolated or incompatible over several generations. This particular form of speciation is sometimes called Mendelian speciation.

    Real-world examples of this can easily be found. We see this in the rapid diversification of cichlid fish in African lakes, which arose from “ancient common variations” rather than new mutations. We also see it in Darwin’s finches, where hybridization and isolation lead to rapid changes in beak size and shape. In fact, this phenomenon is so prevalent that it has its own name in the literature—contemporary evolution.

    Darwin himself noted that domestic breeds (like dogs or pigeons) show more diversity than wild species. If humans can produce hundreds of dog breeds in a few thousand years by isolating traits, natural processes acting on created diversity could easily produce the wild species we see (like zebras, horses, and donkeys) from a single created kind in a similar timeframe.

    Molecular Clocks

    Finally, when we look at Mitochondrial DNA (mtDNA)—which is passed down only from mothers—we find a “clock” that fits the biblical timeline perfectly.

    The number of mtDNA differences between modern humans fits a timescale of about 6,000 years, not hundreds of thousands. While mtDNA clocks suggest a recent mutation accumulation, nuclear DNA differences are too numerous to be explained by mutation alone in 6,000 years. This confirms that the nuclear diversity must be frontloaded (original variety), while the mtDNA diversity represents mutational history.

    Conclusion

    The Created Heterozygosity model explains the origin of species by recognizing that God engineered life with the capacity to adapt, diversify, and fill the earth. It accounts for the massive genetic variation we see today without ignoring the mathematical impossibility of evolving that information from scratch. Rather than being a reaction against science, this model embraces modern genetic data—from the limits of natural selection to the reality of genetic entropy—to provide a robust history of life.

    Part VIII: Created Heterozygosity & Natural Processes

    Created Heterozygosity, specifically as developed within the Created Heterozygosity & Natural Processes (CHNP) model, makes several important predictions that distinguish it from standard Darwinian explanations.

    Prediction 1: “Major” Allelic Architecture

    If created heterozygosity is correct, each gene locus of the human line should feature no more than four predominant alleles encoding functional, distinct proteins. This prediction is based on Adam and Eve carrying a total of four genome copies between them. It can be refined, however, to be even more particular.

    Based on an analysis of the ABO gene within the Created Heterozygosity and Natural Processes (CHNP) model, the evidence suggests there were only two major alleles in the original created pair (Adam and Eve), rather than the theoretical maximum of four, for the following reasons:

    1. Only A and B are Functionally Distinct “Major” Alleles

    While a single pair of humans could theoretically carry up to four distinct alleles (two per person), the molecular data for the ABO locus reveals only two distinct, functional genetic architectures: A and B. The A and B alleles code for functional glycosyltransferase enzymes. They differ from each other by only seven nucleotides, four of which result in amino acid changes that alter the enzyme’s specificity. In an analysis of 19 key human functional loci, ABO is identified as having “dual majors.” These are the foundational, optimized alleles that are highly conserved and predate human diversification. Because A and B represent the only two functional “primordial” archetypes, the CHNP model posits that the original ancestors possessed the optimal A/B heterozygous genotype.

    2. The ‘O’ Allele is a Broken ‘A.’

    The reason there are not three (or four) original alleles (e.g., A, B, and O) is that the O allele is not a distinct, original design. It is a degraded version of the A allele.

    The most common O allele (O01) is identical to the A allele except for a single guanine deletion at position 261. This deletion causes a frameshift mutation, resulting in a truncated, non-functional enzyme. Because the O allele is simply a broken A allele, it represents a loss of information (genetic entropy) rather than originally created diversity. The CHNP model predicts that initial kinds were highly functional and optimized, containing no non-functional or suboptimal gene variants. Therefore, the non-functional O allele would not have been present in the created pair but arose later through mutation.

    3. AB is Optimal For Both Parents

    A critical medical argument for the AB genotype in both parents (and therefore 2 Major created alleles) concerns the immune system and pregnancy. The CHNP model suggests that an optimized creation would minimize physiological incompatibility between the first mother and her offspring.

    In the ABO system, individuals naturally produce antibodies against the antigens they lack. A person with Type ‘A’ blood produces anti-B antibodies; a person with Type ‘B’ produces anti-A antibodies; and a person with Type O produces both.

    Individuals with Type AB blood produce neither anti-A nor anti-B antibodies because they possess both antigens on their own cells.

    If the original mother (Eve) were Type A, she would carry anti-B antibodies, which could potentially attack a Type B or AB fetus (Hemolytic Disease of the Newborn). However, if she were Type AB, her immune system would tolerate fetuses of any blood type (A, B, or AB) because she lacks the antibodies that would attack them.

    If there were more than two original antigens, these problems would be inevitable. The only solution is for both parents to share the same two antigens.

    4. Disclaimer about scope

    This, along with many other examples within the gene catalogue, suggests that most, if not all, original gene loci were bi-allelic, with both parents carrying the same two major alleles. This is not to say all were, as we do not have definitive proof of that, and there are several loci, e.g., immune-response genes, which could theoretically have more than two Majors. However, it is highly likely that all genetic diversity can be explained by bi-genome, rather than quad-genome, diversity. Greater modern diversity, where present, can consistently be partitioned into two functional clades, with subsidiary alleles emerging via SNPs, InDels, or recombination over short timescales.

    Prediction 2: Cross-Species Conservation

    Having similar genes is essential in a created world in order for ecosystems to exist; it shouldn’t be surprising that we share DNA with other organisms. From that premise, it follows that some organisms will be more or less similar to one another, and those similarities can be categorized. Because of the laws of physics and chemistry, there are inherent design constraints on forms of biota, so it is expected that functional genes will be shared throughout life wherever they are applicable. For instance, we share homeobox genes with much of terrestrial life, down to snakes, mice, flies, and worms. These genes are similar because they have similar functions. This is precisely what we would predict from a design hypothesis.

    Both models (CHNP and EES) predict that there will be some shared functional operations throughout all life. Although this prediction leans somewhat in favor of a design hypothesis, it is roughly agnostic evidence. What is a differentiating prediction, however, is that “major” alleles will persist across genera, reflecting shared functional design principles, whereas non-functional variants will be species-specific. This prediction follows from the two models’ different understandings of the power of evolutionary processes to explain diversity.

    This prediction can be tested (along with the first) by examining allelic diversity (particularly in sequence alignment) across related and non-related populations. For instance, take the ABO blood type gene again. The genetic data confirm that functional “major” alleles are conserved across species boundaries, while non-functional variants are species-specific and recent.

    1. Major Alleles (A and B): Shared Functional Design

    Both models acknowledge that the functional A and B alleles are shared between humans and other primates (and even some distinct mammals). However, the interpretation differs, and the CHNP model posits this as evidence of major allelic architecture—original, front-loaded functional templates.

    The functional A and B alleles code for specific glycosyltransferase enzymes. Sequence analysis shows that humans, chimpanzees, and bonobos share the exact same genetic basis for these polymorphisms. This fits the prediction that “major” alleles represent the optimized, original design. Because these alleles are functional, they are conserved across genera (trans-species), reflecting a common design blueprint rather than convergent evolution or deep-time descent.

    Standard evolution attributes this to “trans-species polymorphism,” arguing that these alleles have been maintained by “balancing selection” for 20 million years, predating the divergence of humans and apes.

    2. Non-Functional Alleles (Type O): The Differentiating Test

    The crucial test arises when examining the non-functional ‘O’ allele. Because the ‘O’ allele confers a survival advantage against severe malaria, the standard evolutionary model must do one of the following: 1) explain why the ‘O’ allele is not, like A and B, ancient and shared across lineages (trans-species inheritance), or 2) provide an example of a shared ‘O’ allele across a kind boundary. This prediction must follow because the ‘O’ allele, being the null version, must by evolutionary definition have existed prior to either ‘A’ or ‘B’. What’s more, ‘A’ and ‘B’ alleles break easily, and an ‘O’ allele is not deleterious enough to be selected out of a given population.

    In humans, the most common ‘O’ allele (O01) results from a specific single nucleotide deletion (a guanine deletion at position 261), causing a frameshift that breaks the enzyme. However, sequence analysis of chimpanzees and other primates reveals that their ‘O’ alleles result from different, independent mutations.

    Human and non-human primate ‘O’ alleles are species-specific and result from independent silencing mutations. The mutation that makes a chimp Type ‘O’ is not the same mutation that makes a human Type ‘O’.

    This supports the CHNP prediction that non-functional variants arise after the functional variants, through recent genetic entropy (decay) rather than ancient ancestry. The ‘O’ allele is not a third “created” allele; it is a broken ‘A’ allele that arose independently in humans and chimps after they were distinct populations. It has become fixed in certain populations, such as those native to the Americas, due to the beneficial nature of the gene break.

    This also brings us back to the evolutionary problems we mentioned. Even if the four or more beneficial mutations needed to create an ‘A’ or ‘B’ allele could occur, which we discussed as being incredibly unlikely, either gene would likely break (due to Muller’s Ratchet) faster than could account for the fixity of A and B in primates and other mammals.

    3. Timeline and Entropy

    The mutational pathways for the human ‘O’ allele fit a timeline of <10,000 years, appearing after the initial “major” alleles were established. This aligns with the CHNP view that variants arise via minimal genetic changes (SNPs, Indels) within the last 6,000–10,000 years.

    The emergence of the ‘O’ allele is an example of cis-evolution (diversification within a kind via information loss). It involves breaking a functional gene to gain a temporary survival advantage (malaria resistance), which is distinct from the creation of new biological information.

    4. Broader Loci Analysis

    This pattern is not unique to ABO. An analysis of 19 key human functional loci (including genes for immunity, metabolism, and pigmentation) confirms the “Major Allele” prediction:

    Out of the 19 loci, 16 exhibit a single (or dual, like ABO) major functional allele that is highly conserved across species, meaning that the functional versions of the genes are shared with other primates, mammals, vertebrates, or even eukaryotes. In contrast, non-functional or pathogenic variants (such as the CCR5-Δ32 deletion or CFTR mutations) are predominantly human-specific and arose recently (often <10,000 years ago). And when similar non-functional traits appear in different species (e.g., MC1R loss or the ‘O’ blood group), they are due to convergent, independent mutations, not shared ancestry.

    To illustrate this point, below is a table from the paper testing the CHNP model in 19 functional genes. Table 1 summarizes key metrics for each locus. Across the dataset, 84% (16/19) exhibit a single major functional allele conserved >90% across mammals/primates, with variants emerging <50,000 years ago (kya). ABO and HLA-DRB1 align with dual ancient clades; SLC6A4 shows neutral biallelic drift. Non-functional variants (e.g., nulls, deficiencies) are human-specific in 89% of cases, often via single SNPs/InDels.

    Locus | Major Allele(s) | Functional Groups (Ancient?) | Cross-Species Conservation | Variant Derivation (Changes/Time) | Model Fit (1/2/3)
    HLA-DRB1 | Multiple lineages (e.g., *03, *04) | 2+ ancient clades (pre-Homo-Pan) | High in primates (trans-species) | Recombinations/SNPs; post-speciation (~100 kya) | Strong (clades); Partial (multi); Strong
    ABO | A/B (O derived) | 2 ancient (A/B trans-species) | High in primates | Inactivation (1 nt del.); <20 kya | Strong; Strong; Strong
    LCT | Ancestral non-persistent (C/C) | 1 major | High across mammals | SNPs (e.g., -13910T); ~10 kya | Strong; Strong; Strong
    CFTR | Wild-type (non-ΔF508) | 1 major | High across vertebrates | 3 nt del. (ΔF508); ~50 kya | Strong; Strong; Strong
    G6PD | Wild-type (A+) | 1 major | High (>95% identity) | SNPs at conserved sites; <10 kya | Strong; Strong; Strong
    APOE | ε4 (ancestral) | 1 major (ε3/2 derived) | High across mammals | SNPs (Arg158Cys); <200 kya | Strong; Strong; Partial
    CYP2D6 | *1 (wild-type) | 1 major | Moderate in primates | Deletions/duplications; recent | Strong; Partial; Strong
    FUT2 | Functional secretor | 1 major | High in vertebrates | Truncating SNPs; ancient nulls (~100 kya) | Strong; Strong; Partial
    HBB | Wild-type (HbA) | 1 major | High across vertebrates | SNPs (e.g., sickle Glu6Val); <10 kya | Strong; Strong; Strong
    CCR5 | Wild-type | 1 major | High in primates | 32-bp del.; ~700 ya | Strong; Strong; Strong
    SLC24A5 | Ancestral Ala111 (dark skin) | 1 major | High across vertebrates | Thr111 SNP; ~20–30 kya | Strong; Strong; Strong
    MC1R | Wild-type (eumelanin) | 1 major | High across mammals | Loss-of-function SNPs; convergent in some | Strong; Partial (conv.); Strong
    ALDH2 | Glu504 (active) | 1 major | High across eukaryotes | Lys504 SNP; ~2–5 kya | Strong; Strong; Strong
    HERC2/OCA2 | Ancestral (brown eyes) | 1 major | High across mammals | rs12913832 SNP; ~10 kya | Strong; Strong; Strong
    SERPINA1 | M allele (wild-type) | 1 major | High in mammals (family expansion) | SNPs (e.g., PiZ Glu342Lys); recent | Strong; Strong; Strong
    BRCA1 | Wild-type | 1 major | High in primates | Frameshifts/nonsense; <50 kya | Strong; Strong; Strong
    SLC6A4 | Long/short 5-HTTLPR | 2 neutrally evolved | High across animals | InDel (VNTR); ancient (~500 kya) | Partial; Strong; Partial
    PCSK9 | Wild-type | 1 major | High in primates (lost in some mammals) | SNPs (e.g., Arg469Trp); recent | Strong; Strong (conv. loss); Strong
    EDAR | Val370 (ancestral) | 1 major | High across vertebrates | Ala370 SNP; ~30 kya | Strong; Strong; Strong
    Table 1: Evolutionary Profiles of Analyzed Loci. Model Fit: Tenet 1 (major architecture), 2 (conservation), 3 (derivation). “Partial” indicates minor deviations (e.g., multi-clades or potentially >10 kya).

    This is devastating for the modern synthesis. If the pattern that emerges is one of shared functions rather than shared mistakes, the theory is dead on arrival.

    Prediction 3: Derivation Dynamics

    Another important prediction concerns the timeline entailed by created heterozygosity. If life were designed recently (an entailment of CHNP), variant alleles must have arisen from “majors” through minimal modifications, feasible within roughly 6 to 10 thousand years.

    Looking at the ABO blood group once more, we see the feasibility of this prediction. ABO offers a “cornerstone” example, demonstrating how complex diversity collapses into simple, recent mutational events.

    1. The ABO Case Study: Minimal Modification

    The CHNP model identifies the A and B alleles as the original, front-loaded “major” alleles created in the founding pair. The diversity we see today (such as the various O alleles and A subtypes) supports the prediction of minimal, recent modification:

    As we’ve discussed, the most common O allele (O01) is not a novel invention; it is a broken ‘A’ allele. It differs from the ‘A’ allele by a single guanine deletion at position 261. This minute change causes a frameshift that renders the enzyme non-functional. Other ABO variants show similar minimal changes. The A2 allele (a weak version of A) results from a single nucleotide deletion and a point mutation. The B3 allele results from point mutations that reduce enzymatic activity.

    These are not complex architectural changes requiring millions of years. They are “typos” in the code. Molecular analysis confirms that the mutation causing the O phenotype is a common, high-probability event.

    2. The Mathematical Feasibility of the Timeline

    A mathematical breakdown can be used to demonstrate that these variants would inevitably arise within a young-earth timeframe using standard mutation rates.

    Using a standard mutation rate (1.5×10^−8 per base pair per generation) and an exponentially growing population (starting from founders), mutations accumulate rapidly and easily. Calculations suggest that in a population growing from a small founder group, the first expected mutations in the ABO exons would appear as early as Generation 4 (approx. 80 years). Over a period of 5,000 years, with a realistic population growth model, the 1,065 base pairs of the ABO exons would theoretically experience tens of thousands of mutation events. The gene would be thoroughly saturated, meaning virtually every possible single-nucleotide change would have occurred multiple times.
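
    As a rough check on that claim, here is a toy calculation. It is a hedged sketch that uses the mutation rate and exon length quoted above but assumes its own growth curve (two diploid founders, ~10% growth per generation, ~25-year generations over 5,000 years); the paper’s own population model may differ, so only the order of magnitude is meaningful.

    ```python
    # Toy model: expected new mutations hitting the ABO exons in a growing population.
    # Assumptions (illustrative only):
    #   mu     = 1.5e-8 mutations per base pair per generation (quoted above)
    #   abo_bp = 1,065  base pairs of ABO coding exons (quoted above)
    #   growth = 10% per generation from 2 founders; 5,000 years at ~25 yrs/generation

    mu = 1.5e-8
    abo_bp = 1_065
    growth = 1.10
    generations = 5_000 // 25  # 200 generations

    population = 2.0
    total_mutations = 0.0
    for _ in range(generations):
        # Non-overlapping generations: every member of the new generation is a
        # new birth carrying two ABO copies, each with abo_bp * mu expected hits.
        population *= growth
        total_mutations += population * 2 * abo_bp * mu

    print(f"Final population:       ~{population:,.0f}")
    print(f"Expected ABO mutations: ~{total_mutations:,.0f}")
    ```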

    Specific estimates for the emergence of the ‘O’ allele place it within 50 to 500 generations (1,000 to 10,000 years) under neutral drift, or even faster with selective pressure. This perfectly fits the CHNP timeline of 6,000-10,000 years.

    3. Further Validation: The 19 Loci Analysis

    This pattern of “Ancient Majors, Recent Variants” is not unique to ABO. The 19 key human functional loci study also confirms that this is a systemic feature of the human genome.

    Across genes involved in immunity, metabolism, and pigmentation, derived variants consistently appear to have arisen within the last 10,000 years (Holocene):

    • ALDH2: The variant causing the “Asian flush” (Glu504Lys) is estimated to be ~2,000 to 5,000 years old.
    • LCT (Lactase Persistence): The mutation allowing adults to digest milk arose ~10,000 years ago, coinciding with the advent of dairy farming.
    • HBB (Sickle Cell): The hemoglobin variant conferring malaria resistance emerged <10,000 years ago.

    In 89% of the analyzed cases, these variants are caused by single SNPs or Indels derived from the conserved major allele.

    The prediction that variant alleles must be derived via minimal modifications feasible within a young timeframe is strongly supported by the genetic data. The ABO system demonstrates that the “O” allele is merely a single deletion that could arise in less than 100 generations.

    This confirms the CHNP view that while the “major” alleles (A and B) represent the original, complex design (Major Allelic Architecture), the variants (O, A2, etc.) are the result of recent, rapid genetic entropy (cis-evolution) that requires no deep-time evolutionary mechanisms to explain.

    An ABO Blood Group Paradox

    As we have run through these first three predictions of the Created Heterozygosity model, we have dealt particularly with the ABO gene and have run into a peculiar evolutionary puzzle. Let’s first speak of this paradox more abstractly in the form of an analogy:

    Imagine a family of collectors who passed down two distinct types of antique coins (Coins A and B) to their descendants over centuries because those coins were valuable. If a third type of coin (Coin O) was also extremely valuable (offering protection/advantage) and easier to mint, you would predict the ancestors would have kept Coin O and passed it down to both lineages alongside A and B. You wouldn’t expect the descendants to inherit Coins A and B from the ancestor yet have to re-mint Coin O from scratch, independently, in each lineage.

    By virtue of this same logic, evolutionary models must predict that the ‘O’ allele should be ancient (20 million years) due to balancing selection. However, the genetic data shows ‘O’ alleles are recent and arose independently in different lineages. This supports the CHNP view: the original ancestors were created with functional A and B alleles (heterozygous), and the O allele is a recent mutational loss of function.

    Prediction 4: Rapid Speciation and Adaptive Radiation

    If created heterozygosity is true, and organisms were designed with built-in potential for adaptation given their environment, then we should expect to find mechanisms of extreme foresight that permit rapid change in response to external stressors. There are, in fact, many such mechanisms written about in the scientific literature: contemporary evolution, natural genetic engineering, epigenetics, higher agency, continuous environmental tracking, non-random evolution, evo-devo, etc.

    The phenomenon of adaptive radiation—where a single lineage rapidly diversifies into many species—is clearly differentiating evidence for front-loaded heterozygosity rather than mutational evolution. Why? Because random mutation has no foresight. Random mutations do not prepare an organism for any eventuality. If it is not useful now, get rid of it. That is the mantra of evolutionary theory. That is the premise of natural selection. However, this premise is drastically mistaken.

    1. Natural Genetic Engineering & Non-Random Evolution

    The foundation of this alternative view is that genetic change is not accidental. Molecular biologist James Shapiro argues that cells are not passive victims of random “copying errors.” Instead, they possess “active biological functions” to restructure their own genomes. Cells can cut, splice, and rearrange DNA, often using mobile genetic elements (transposons) and retroviruses to rewrite their genetic code in response to stress. Shapiro calls the genome a “read-write” database rather than a read-only memory (ROM).

    Building on this, Dr. Lee Spetner proposed that organisms have a built-in capacity to adapt to environmental triggers. These changes are not rare or accidental but can occur in a large fraction of the population simultaneously. This work is supported by modern research from people like Dr. Michael Levin and Dr. Denis Noble. Mutations are increasingly proving to be predictable responses to environmental inputs.

    2. The Architecture: Continuous Environmental Tracking (CET)

    If organisms engineer their own genetics, how do they know when to do it? This is where CET provides the engineering framework.

    Proposed by Dr. Randy Guliuzza, CET treats organisms as engineered entities. Just like a self-driving car, organisms possess input sensors (to detect the environment), internal logic/programming (to process data), and output actuators (to execute biological changes). In Darwinism, the environment is the “selector” (a sieve). In CET, the organism is the active agent. The environment is merely the data the organism tracks. For example, blind cavefish lose their eyes not because of random mutations and slow selection, but because they sense the dark environment and downregulate eye development to conserve energy, a process that is rapid and reversible. More precisely, the regulatory systems of these cavefish detect the low salinity of cave water, which triggers the loss of eyes over a short timeframe.

    3. The Software: Epigenetics

    Epigenetics acts as the “formatting” or the switches for the DNA computer program. Epigenetic mechanisms (like methylation) regulate gene expression without changing the underlying DNA sequence. This allows organisms to adapt quickly to environmental cues—such as plants changing flowering times or root structures. These changes can be heritable. For instance, the environment of a parent (e.g., diet, stress) can affect the development of the offspring via RNA absorbed by sperm or eggs, bypassing standard natural selection. This blurs the line between the organism and its environment, facilitating rapid adaptation.

    4. The Result: Contemporary Evolution

    When these internal mechanisms (NGE, CET, Epigenetics) function, the result is Contemporary Evolution—observable changes happening in years or decades, not millions of years. Conservationists and biologists are observing “rapid adaptation” in real-time. Examples include invasive species changing growth rates in under 10 years, or the rapid diversification of cichlid fish in Lake Victoria.

    For Young Earth Creationists (YEC), Contemporary Evolution validates the concept of Rapid Post-Flood Speciation. It shows that getting from the “kinds” on Noah’s Ark to modern species diversity in a few thousand years is biologically feasible.

    Conclusion

    So, where does the information for all this diversity come from? CHNP is the specific model that explains the source of the variation being tracked and engineered.

    This model posits that original kinds were created as pan-heterozygous (carrying different alleles at almost every gene locus). As populations grew and migrated (Contemporary Evolution), they split into isolated groups. Through sexual reproduction (recombination), the original heterozygous traits were shuffled. Over time, specific traits became “fixed” (homozygous), leading to new species.

    This model argues that random mutation cannot bridge the gap between distinct biological forms (the Valley of Death) due to toxicity and complexity. Therefore, diversity must be the result of sorting pre-existing (front-loaded) functional alleles rather than creating new ones from scratch.

    Look at it this way:

    1. Mendelian Speciation/Created Heterozygosity is the Resource: It provides the massive library of latent genetic potential (front-loaded alleles).

    2. Continuous Environmental Tracking is the Control System: It uses sensors and logic to determine which parts of that library are needed for the current environment.

    3. Epigenetics and Natural Genetic Engineering are the Mechanisms: They are the tools the cells use to turn genes on/off (epigenetics) or restructure the genome (NGE) to express those latent traits.

    4. Contemporary Evolution is the Observation: It is the visible, rapid diversification (cis-evolution) we see in nature today as a result of these internal systems working on the front-loaded information.

    Together, these concepts argue that organisms are not passive lumps of clay shaped by external forces (Natural Selection), but sophisticated, engineered systems designed to adapt and diversify rapidly within their kinds.

    The mechanism driving this diversity is the recombination of pre-existing heterozygous genes. Just 20 heterozygous genes can theoretically produce over one million unique homozygous phenotypes. As populations isolate and speciate, they lose their initial heterozygosity and become “fixed” in specific traits. This process, known as cis-evolution, explains diversity within a kind (e.g., wolves to dog breeds) but differs fundamentally from trans-evolution (evolution between kinds), which finds no mechanism in genetics.
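
    The “over one million” figure is just two raised to the twentieth power; the one-line check below assumes each of the 20 loci can become fixed for either of its two alleles.

    ```python
    # Each of 20 heterozygous loci can become homozygous for either allele,
    # giving 2^20 fully homozygous combinations.
    HETEROZYGOUS_LOCI = 20
    combinations = 2 ** HETEROZYGOUS_LOCI
    print(f"{combinations:,} distinct homozygous combinations")  # 1,048,576
    ```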

    The CHNP model argues that mutations are insufficient to create the original genetic information due to thermodynamic and biological constraints. De novo protein creation is hindered by a “Valley of Death”—a region of sequence space where intermediate, misfolded proteins are toxic to the cell. Natural selection eliminates these intermediates, preventing the gradual evolution of novel protein folds.

    Mechanisms often cited as creative, such as gene duplication or recombination, are actually “remixing engines.” Duplication provides redundancy, not novelty, and recombination shuffles existing alleles without creating new genetic material. Because mutations are modifications (typos) rather than creations, the original functional complexity must have been present at the beginning.

    Genetics reveals that organisms contain “latent” or hidden information that can be expressed later.

    Information can be masked by dominant alleles or epistatic interactions (where one gene suppresses another). This allows phenotypic traits to remain hidden for generations and reappear suddenly when genetic combinations shift, facilitating rapid adaptation without new mutations.

    Genetic elements like transposons can reversibly activate or deactivate genes (e.g., in grape color or peppered moths), acting as switches for pre-existing varieties rather than creators of new genes.

    Summary

    The genetic evidence for created heterozygosity rests on the observation that biological novelty is ancient and conserved, while variation is recent and degenerative. By starting with ancestors endowed with high levels of heterozygosity, the “forest” of life’s diversity can be explained by the rapid sorting and recombination of distinct, front-loaded genetic programs.

  • Human Eyes – Optimized Design

    Human Eyes – Optimized Design

    Is the human eye poorly designed? Or is it optimal?

    If you ask most proponents of modern evolutionary theory, you will often hear that the eye is a prime example of unfortunate evolutionary history and dysteleology.

    There are three major arguments that are used in defending this view:

    The human eye:

    1. has an inverted retina and is wired “backwards”
    2. has a blind spot where the optic nerve exits
    3. is fragile due to the risk of retinal detachment

    #1 THE HUMAN EYE IS INVERTED

    The single most famous critique is, of course, the backward wiring of the retina. An optimal sensor should use its entire surface area for data collection, right? Yet in the vertebrate eye, light must pass through layers of axons and capillaries before it reaches the photoreceptors.

    Take the cephalopod eye: it has an everted retina, with the photoreceptors facing the light and the nerves behind them, so there is no need for a blind spot. On this view, the human eye’s reversed wiring represents a mere local (rather than global) optimum, an eye that could only optimize so far because of its evolutionary history.

    Yet this argument misses non-negotiable constraints. The human eye faces a metabolic demand that simply doesn’t exist in the squid or octopus.

    Photoreceptors (the rods and cones) have the highest metabolic rate of any cell in the body. They generate substantial heat, consume enormous amounts of oxygen, and undergo constant repair from the relentless bombardment of photons. The energy demand is massive. This is an issue of thermoregulation, not just optics.

    The reason this is important is because the vertebrate eye is structured with an inverted retina precisely for the survival and longevity of these high-energy photoreceptors. These cells require massive, continuous nutrient and oxygen delivery, and rapid waste removal.

    The current inverted orientation is the only geometric configuration that allows the photoreceptors to be placed in direct contact with the Retinal Pigment Epithelium (RPE) and the choroid. The choroid, a vascular layer, serves as the cooling system and high-volume nutrient source, similar to a cooling unit directly attached to a high-performance processor.

    If the retina were wired forward, the neural cabling would form a barrier, blocking the connection between the photoreceptors and the choroid. This would inevitably lead to nutrient starvation and thermal damage. Not only that, but human photoreceptors constantly shed toxic outer segments due to damage, which must be removed via phagocytosis by the RPE. The eye needs the tips of the photoreceptors to be physically embedded in the RPE. 

    If the nerve fibers were placed in front, they would form a barrier that prevents this waste removal. The inverted arrangement is a geometric imperative for long-term molecular recycling, and it is what allows eyes to routinely last 80+ years.

    Critics often insist, however, that even if the neural and capillary layers are metabolically necessary, the design is still poor because those layers block or scatter incoming light.

    Yet research has demonstrated that Müller glial cells span the thickness of the retina and act essentially as living fiber-optic cables. These cells possess a higher refractive index than the surrounding tissue, which lets them channel light directly to the cones with minimal scattering.

    So this feature goes from being a supposed design flaw to an effective low-pass filter that improves the signal-to-noise ratio and visual acuity of the human eye.

    But wait, there’s more! The neural layers contain yellow pigments (lutein and zeaxanthin) which absorb excess blue and ultraviolet light that is highly phototoxic! This layer is basically a forcefield against harmful rays (photo-oxidative damage) which extends the lifespan of these super delicate sensors.

    #2 THE HUMAN EYE HAS A BLIND SPOT

    However, the skeptics will still push back (which leads to point number 2): surely a good design would not include a blind spot where the optic nerve runs through! Indeed, this point looks fairly powerful at a glance. But on closer inspection, this exit point, where roughly a million nerve fibers bundle together to leave the eye, turns out to be an example of optimized routing rather than a critical flaw.

    This is true for several reasons. For one, having the nerves bundle into a single reinforced exit point maximizes the structural robustness of the remaining retina. If the nerve fibers instead exited individually, or even in small clusters scattered across the retina, the integrity of the whole structure would be radically lowered, leaving the retina prone to tearing during rapid eye movements (saccades). In other words, we wouldn’t be getting much REM sleep, and, more to the point, we couldn’t safely look around at all.

    I’d say, even if that was the only advantage, the loss of a tiny fraction of our visual field is worth the trade-off.

    Second, and this is important, the blind spot is functionally irrelevant. What do I mean by that? I mean that humans were designed with two eyes for the purpose of depth perception, i.e., understanding where things are in space. You can’t do that with one eye, so a single eye was never an option. With two eyes, the functional retina of the left eye covers the blind spot of the right eye, and vice versa. There is no problem with this design so long as both the visual field and depth perception are fully covered, and they are.

    Third, the optic disc is also used for integrated signal processing: it contains melanopsin-driven cells that calibrate brightness perception for the entire eye, effectively using the exit cable as a sensor probe. The nerves at the exit point also detect brightness and handle that signaling locally, which is remarkably efficient.

    #3 THE HUMAN EYE IS VULNERABLE

    The vulnerability in question is retinal detachment, which occurs when the neural retina separates from the RPE. Why does this happen? It is a consequence of the retina being held loosely against the choroid, largely by hydrostatic pressure. Critics call this a failure point. Wouldn’t a good design fuse the retina solidly in place, especially if it needs to stay connected to the RPE? Well… no, not remotely.

    The RPE must continuously and actively transport large volumes of fluid out of the subretinal space to the choroid to prevent edema (swelling) and maintain clear vision. A mechanically fused retina would impede this rapid fluid transport and waste exchange. In short, the critics offer a solution that is really a non-solution: the eye could not function at all under the supposedly “superior” alternative they suggest.

    So, what have we learned?

    The human eye is not a collection of accidents, but a masterpiece of constrained optimization. When the entire system (eye and brain) is evaluated, the result is astonishing performance. The eye achieves resolution at the diffraction limit (the theoretical physical limit imposed by the wave nature of light!) at the fovea, meaning it is hitting the maximum acuity possible for an aperture of its size.
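
    As a rough back-of-the-envelope check on that claim, the Rayleigh criterion gives the diffraction-limited angular resolution as roughly 1.22 λ/D. Assuming illustrative values of a ~3 mm daylight pupil and 550 nm green light (both assumptions, not figures from the text above), the limit comes out near the ~1 arcminute acuity of normal foveal vision:

    ```python
    import math

    # Rayleigh criterion for angular resolution: theta ~= 1.22 * lambda / D.
    # Illustrative values: green light (~550 nm) and a ~3 mm daylight pupil.
    wavelength_m = 550e-9
    pupil_diameter_m = 3e-3

    theta_rad = 1.22 * wavelength_m / pupil_diameter_m
    theta_arcmin = math.degrees(theta_rad) * 60

    print(f"{theta_rad:.2e} rad  ~=  {theta_arcmin:.2f} arcmin")
    # ~2.2e-4 rad, roughly 0.8 arcmin -- on the order of the ~1 arcmin
    # acuity of normal (20/20) human foveal vision.
    ```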

    The arguments that the eye is “sub-optimal” often rely on comparing it to the structurally simpler cephalopod eye. Yet cephalopod eyes lack trichromatic vision (they don’t see color the way we do), have far lower acuity, and only need to function over a lifespan of 1–2 years, whereas the human eye must self-repair and maintain high performance for eight decades. The eye’s complex features (the Müller cells, the foveal pit, the inverted architecture) are the necessary subsystems required to achieve this maximal performance within the constraints of vertebrate biology and physics.

    That’s not even getting into features like the mitochondrial micro-lenses in our cone cells. Recent research suggests that the tightly packed mitochondria in cone photoreceptors may function as micro-lenses that concentrate light onto the outer segments, adding yet another layer of optical optimization, and one that would need to have been in place perhaps even earlier than the inverted retinal architecture.

    The fact remains that the eye is highly optimized, despite the critics’ best attempts to argue otherwise. So the question stands: how could something this optimized evolve by random chance mutation, and do so both so early and so often in the history of biota?

  • Mutation is not Creation

    Mutation is not Creation

    Evolution is certainly a tricky word.

    For a creationist, it’s clear as day why. There are two distinct definitions in play, and equivocating between them blurs the lines and derails any attempt at productive dialogue.

    “Change in allele frequencies in a population over time.”

    The breakdown: Alleles are versions of a gene that differ in part of their sequence, which often changes the functional outcome in some way.

    The frequencies in a population are the proportions of its members carrying (or lacking) a given allele.

    Finally, the premise of this definition is that the number of organisms in a population with a certain trait can grow or diminish over time.

    This seems to me a very uncontroversial thing to hold to. Insofar as evolution could be a fact, this is certainly hard to deny.

    All that is needed for this first definition is mechanisms for sorting and redistribution of existing variation.

    However, what is commonly inferred from the term is an altogether separate conception:

    “All living things are descended from a common ancestor.”

    This is clearly different. An evolutionist may agree, but argue that these are merely differences in degree (or scale). But is that the case?

    The only way to know whether the one definition flows seamlessly into the next or whether this is a true equivocation is to understand the underlying mechanism. For instance, let’s talk about movement.

    South America and Asia are roughly four times further apart than Australia and Antarctica. Yet, I could say, rightly, that I could walk from South America to Asia, but I could not say the same about Australia and Antarctica. Why is this? If I can walk four times the distance in one instance, why should I be thus restricted?

    The obvious reason is this: Australia and Antarctica are separated by the entire width of the deep, open Southern Ocean. I should not expect that I can traverse, by walking, two places with no land betwixt them.

    The takeaway is this: My extrapolation is only good so long as my mechanism is sufficient. Walking is only possible with land bridges. Without land bridges, it doesn’t matter the distance; you’re not going to make it.

    This second definition requires mechanisms for sorting and redistribution of existing variation as well as creation of new biological information and structures.

    With that consideration, let us now take this lesson and apply it to the mechanisms of change which evolutionists espouse.

    There are many, but we will quickly narrow our search.

    Natural Selection: This is any process by which the environment (ecology, climate, niche, etc.) culls certain variants from a population.

    Gene Flow: This is the movement of alleles between populations through migration and interbreeding.

    Genetic Drift/Draft: This is the fluctuation of allele frequencies due to random sampling (drift) or to linkage with selected sites (draft), rather than direct selection on the alleles themselves.

    Sexual Selection/Non-Random Mating: This is the process by which organisms preferentially mate on the basis of particular phenotypes.

    The point of this exercise is to observe that these are all mechanisms of sorting and redistribution of existing variation, but they are not the mechanisms that create that variation in the first place. Any mechanism that lacks creative power is insufficient to account for our second definition.

    The mechanism that is left is, you might have guessed, mutation.

    Here’s the problem: mutation is its own conflation. We need to unravel the many ways in which DNA can change. There are many kinds of mutations, and what’s true for one may not be true for another. For example, it is often said that mutations are:

    1. Copying errors
    2. Creative
    3. Random with respect to fitness

    However, this is hardly the case for many of the various phenomena that are classified as mutations.

    For instance, take recombination.

    Recombination is not a copy error. It is a very particular, enzymatically facilitated meiotic process, orchestrated by dedicated cellular machinery rather than occurring as an accident.

    Recombination is not creative. Although it can technically cause a change in allele frequencies (as a new genotype is being created), so can every other non-creative process. It can no more create new genetic material than a card shuffler can create new cards.

    Recombination is not random with respect to fitness. Even though recombination, like a card shuffler, is random in one sense, certain random processes have a telos that makes them something other than altogether random. A card shuffler is not random with respect to the “fitness” of the card game; it is specifically designed to make for a fairer and more balanced game night. Likewise, recombination, particularly homologous recombination (HR), is fundamentally a high-fidelity DNA repair pathway. It is designed to prevent the unchecked spread of broken or deleterious alleles within a single genotype. Like the card shuffler, the mechanism of recombination has no foresight, but it has an explicit function nonetheless.
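
    To make the card-shuffler point concrete, here is a toy Python sketch (the loci and allele names are hypothetical): a single crossover between two parental haplotypes reshuffles their alleles into new combinations, but never produces an allele that was not already present in a parent:

    ```python
    import random

    random.seed(1)

    def recombine(hap1, hap2):
        """Single crossover between two parental haplotypes at a random point."""
        point = random.randrange(1, len(hap1))
        return hap1[:point] + hap2[point:], hap2[:point] + hap1[point:]

    # Two parental haplotypes over five loci (allele labels illustrative).
    mom = ["A1", "B1", "C1", "D1", "E1"]
    dad = ["A2", "B2", "C2", "D2", "E2"]

    child1, child2 = recombine(mom, dad)
    print(child1, child2)

    # Every allele in the recombinants was already present in a parent:
    # shuffling the deck produces new hands, never new cards.
    assert set(child1) | set(child2) <= set(mom) | set(dad)
    ```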

    Besides recombination, there are many discrete ways in which mutations can happen. On the small scale, we see things like Single Nucleotide Polymorphisms (SNPs) and Insertions and Deletions (Indels). Zooming out, we also find mutations such as duplications & deletions of genes or multiple genes (e.g., CNVs), exon shuffling, and transposable elements. On the grand scale, we see events such as whole genome duplications and epigenetic modifications as well.

    On the small scale, Single Nucleotide Polymorphisms (SNPs) and Insertions and Deletions (Indels) are the equivalent of typos or missing characters within an existing blueprint. While a typo can certainly change the meaning of a sentence, it cannot generate a completely new architectural plan. It modifies the existing instruction set; it does not introduce a novel concept or function absent in the original text. These are powerful modifiers, but their action is always upon pre-existing information.
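
    A minimal illustration of the “typo” point, using a made-up coding sequence: a single-base substitution alters one codon in place, while a single-base deletion shifts the reading frame of everything downstream; in neither case does anything appear that was not derived from the original text:

    ```python
    def codons(seq):
        """Split a coding sequence into triplet codons (trailing bases dropped)."""
        return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

    original = "ATGGCTGAATTCGGA"                   # a toy coding sequence
    snp   = original[:4] + "A" + original[5:]      # substitute one base
    indel = original[:4] + original[5:]            # delete one base

    print(codons(original))  # ['ATG', 'GCT', 'GAA', 'TTC', 'GGA']
    print(codons(snp))       # one codon altered, the rest untouched
    print(codons(indel))     # the reading frame shifts for every downstream codon

    # Either way, the result is an edit of an existing instruction string;
    # no sequence appears that was not derived from the original text.
    ```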

    It is also the case that these mutations can never rightly be called evolution. They are not creative; they are only destructive mechanisms. Copy errors create noise, not clarity, in information systems.

    Further, these small-scale mutations happen within the context of the preexisting structure and integrity of the genome, so that even those said to be beneficial are preordained to be so by higher-level design principles. For instance, much work has been done to show that nucleosomes protect DNA from damage and that structural variants stabilize the regions in which they emerge:

    “Structural variants (SVs) tend to stabilize regions in which they emerge, with the effect most pronounced for pathogenic SVs. In contrast, the effects of chromothripsis are seen across regions less prone to breakages. We find that viral integration may bring genome fragility, particularly for cancer-associated viruses.” (Pflughaupt et al.)

    “Eukaryotic DNA is organized in nucleosomes, which package DNA and regulate its accessibility to transcription, replication, recombination, and repair… living cells nucleosomes protect DNA from high-energy radiation and reactive oxygen species.” (Brambilla et al.)

    Moving to the medium scale, consider duplications and deletions (CNVs) and exon shuffling. Gene duplication, often cited as a source of novelty, is simply copying an entire, functional module—a paragraph or even a full chapter. This provides redundancy. It is often supposed that this allows one copy to drift while the original performs its necessary task. But gene duplications are not simply ignored by the genome or by selective processes. They are often quickly discarded if they confer no benefit; otherwise, they are incorporated in highly regulated ways.

    “Gene family members may have common non-random patterns of origin that recur independently in different evolutionary lineages (such as monocots and dicots, studied here), and that such patterns may result from specific biological functions and evolutionary needs.” (Wang et al.)

    Here we see that there is often a causal link between the needs of the organism and the duplication event itself. Further, we observe a highly selective process of monitoring post-duplication:

    “Recently, a nonrandom process of gene loss after these different polyploidy events has been postulated [12,31,38]. Maere et al. [12] have shown that gene decay rates following duplication differ considerably between different functional classes of genes, indicating that the fate of a duplicated gene largely depends on its function.” (Casneuf et al.)

    Even if the function conferred was redundancy, redundancy is not creation; it is merely an insurance policy for existing information. Where, precisely, is the mechanism that takes that redundant copy and molds it into a fundamentally new structure or process—say, turning a light-sensing pigment gene into a clotting factor? What is the search space that will have to be traversed? Indels and SNPs are not sufficient to modify a duplication into something entirely novel. Novel genes require novel sequences for coding specific proteins and novel sequences for regulation. Duplication at best provides a scratch pad, which is highly sensitive to being tampered with.
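
    To give a sense of the scale of that search space (the protein length here is illustrative, not a figure from the sources above): even a modest 100-residue protein sits in a space of 20^100 possible sequences, a number with well over a hundred digits:

    ```python
    # Rough size of the sequence search space for a protein of a given length
    # (length chosen for illustration; real proteins range widely in size).
    AMINO_ACIDS = 20
    length = 100

    search_space = AMINO_ACIDS ** length
    print(f"20^{length} has {len(str(search_space))} digits")  # 131 digits
    ```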

    Exon shuffling, similarly, is a process of reorganization, splicing together pre-existing functional protein domains. This is the biological equivalent of an editor cutting and pasting sentences from one section into another. The result can be a new combination, but every word and grammatical rule was already present. It is the sorting and redistribution of parts.

    Further, exon shuffling is a highly regulated process that has been shown to be constrained by splice frame rules and mediated by TEs in introns.

    “Exon shuffling follows certain splice frame rules. Introns can interrupt the reading frame of a gene by inserting a sequence between two consecutive codons (phase 0 introns), between the first and second nucleotide of a codon (phase 1 introns), or between the second and third nucleotide of a codon (phase 2 introns).” (Wikipedia Contributors)

    This gives a taste of the precision and intense regulation, prerequisite and premeditated, required to perform what are essentially surgical operations that produce specialized proteins for cellular use. One reason it is such a delicate process is described in this journal article:

    “Successful shuffling requires that the domain in question is bordered by introns that are of the same phase, that is, that the domain is symmetrical in accordance with the phase-compatibility rules of exon shuffling (Patthy 1999b), because shuffling of asymmetrical exons/domains will result in a shift of the reading frame in the downstream exons of recipient genes.” (Kaessmann)
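
    To spell out the phase-compatibility rule described in these quotes, here is a small sketch (the helper names and coding positions are hypothetical): an intron’s phase is simply the number of coding bases preceding it modulo three, and a domain is “symmetric”, and therefore shuffleable without a frameshift, only when its flanking introns share a phase:

    ```python
    def intron_phase(coding_position):
        """Phase of an intron inserted after the given number of coding bases:
        0 = between codons, 1 = after the first base of a codon, 2 = after the second."""
        return coding_position % 3

    def is_symmetric(upstream_phase, downstream_phase):
        """A domain is 'symmetric' when its flanking introns share a phase, so
        moving it elsewhere does not shift the downstream reading frame."""
        return upstream_phase == downstream_phase

    # Illustrative example: a domain flanked by two phase-1 introns can be
    # shuffled without disrupting the frame; a phase-1/phase-0 domain cannot.
    print(is_symmetric(intron_phase(100), intron_phase(202)))  # 1 == 1 -> True
    print(is_symmetric(intron_phase(100), intron_phase(201)))  # 1 == 0 -> False
    ```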

    In the same way, transposable elements are constrained by the epigenetic and structural goings-on of the genome. Research shows that transposase recognizes DNA structure at insertion sites, and there are physical constraints caused by chromatin:

    “We show that all four of these measures of DNA structure deviate significantly from random at P element insertion sites. Our results argue that the donor DNA and transposase complex performing P element integration may recognize a structural feature of the target DNA.” (Liao Gc et al.)

    Finally, we look at the grand scale. Whole Genome Duplication (WGD) is the ultimate copy-paste—duplicating the entire instructional library. Again, this provides massive redundancy but offers zero novel genetic information. This is not creative in any meaningful sense, even at the largest scale.

    As for epigenetic modifications, these are critical regulatory mechanisms that determine when and how existing genes are expressed. They are the rheostats and switches of the cell, changing the output and timing without ever altering the source code (the DNA sequence). They are regulatory, not informational creators.

    The central issue remains: The second definition of evolution requires the creation of new organizational blueprints and entirely novel biological functions.

    The mechanism of change relied upon—mutation—is, across all its various types, fundamentally a system of copying, modification, deletion, shuffling, or regulation of existing, functional genetic information. None of these phenomena, regardless of their scale, demonstrates the capacity to generate the required novel information (the “land bridge”) necessary to traverse the vast gap between one kind of organism and another. Again, they are really great mechanisms for change over time, but they are pitiable creative mechanisms.

    Therefore, the argument that the two definitions of evolution are merely differences of scale falls apart. The extrapolation from observing a shift in coat color frequency (Definition 1) to positing a common ancestor for all life (Definition 2) is logically insufficient. It requires a creative mechanism that is qualitatively different from the mechanisms of sorting and modification we observe. Lacking that demonstrated, information-generating mechanism, we are left with two equivocal terms, where one is an undeniable fact of variation and the other is an unsupported inference of mechanism—a proposal to walk across the deep, open ocean with only the capacity to walk on land.

    Works Cited

    Brambilla, Francesca, et al. “Nucleosomes Effectively Shield DNA from Radiation Damage in Living Cells.” Nucleic Acids Research, vol. 48, no. 16, 10 July 2020, pp. 8993–9006, pmc.ncbi.nlm.nih.gov/articles/PMC7498322/, https://doi.org/10.1093/nar/gkaa613. Accessed 30 Oct. 2025.

    Casneuf, Tineke, et al. “Nonrandom Divergence of Gene Expression Following Gene and Genome Duplications in the Flowering Plant Arabidopsis Thaliana.” Genome Biology, vol. 7, no. 2, 2006, p. R13, https://doi.org/10.1186/gb-2006-7-2-r13. Accessed 7 Sept. 2021.

    Kaessmann, H. “Signatures of Domain Shuffling in the Human Genome.” Genome Research, vol. 12, no. 11, 1 Nov. 2002, pp. 1642–1650, https://doi.org/10.1101/gr.520702. Accessed 16 Jan. 2020.

    Liao, G. C., et al. “Insertion Site Preferences of the P Transposable Element in Drosophila melanogaster.” Proceedings of the National Academy of Sciences of the United States of America, vol. 97, no. 7, 14 Mar. 2000, pp. 3347–3351, https://doi.org/10.1073/pnas.97.7.3347. Accessed 1 Dec. 2023.

    Pflughaupt, Patrick, et al. “Towards the Genomic Sequence Code of DNA Fragility for Machine Learning.” Nucleic Acids Research, vol. 52, no. 21, 23 Oct. 2024, pp. 12798–12816, https://doi.org/10.1093/nar/gkae914. Accessed 8 Nov. 2025.

    Wang, Yupeng, et al. “Modes of Gene Duplication Contribute Differently to Genetic Novelty and Redundancy, but Show Parallels across Divergent Angiosperms.” PLoS ONE, vol. 6, no. 12, 2 Dec. 2011, p. e28150, https://doi.org/10.1371/journal.pone.0028150. Accessed 20 Dec. 2021.

    Wikipedia Contributors. “Exon Shuffling.” Wikipedia, Wikimedia Foundation, 31 Oct. 2025, en.wikipedia.org/wiki/Exon_shuffling.

  • Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Abstract

    The human genome contains numerous regulatory elements that control gene expression, including canonical and alternative promoters. While DDX11L2 is annotated as a pseudogene, its functional relevance in gene regulation has been a subject of interest. This study leverages publicly available genomic data from the UCSC Genome Browser, integrating information from the ENCODE project and ReMap database, to investigate the transcriptional activity within a specific intronic region of the DDX11L2 gene (chr2:113599028-113603778, hg38 assembly). Our analysis reveals the co-localization of key epigenetic marks, candidate cis-regulatory elements (cCREs), and RNA Polymerase II binding, providing robust evidence for an active alternative promoter within this region. These findings underscore the complex regulatory landscape of the human genome, even within annotated pseudogenes.

    1. Introduction

    Gene expression is a tightly regulated process essential for cellular function, development, and disease. A critical step in gene expression is transcription initiation, primarily mediated by RNA Polymerase II (Pol II) in eukaryotes. Transcription initiation typically occurs at promoter regions, which are DNA sequences located upstream of a gene’s coding sequence. However, a growing body of evidence indicates the widespread use of alternative promoters, which can initiate transcription from different genomic locations within or outside of a gene’s canonical promoter, leading to diverse transcript isoforms and complex regulatory patterns [1].

    The DDX11L2 gene, located on human chromosome 2, is annotated as a DEAD/H-box helicase 11 like 2 pseudogene. Pseudogenes are generally considered non-functional copies of protein-coding genes that have accumulated mutations preventing their translation into functional proteins. Despite this annotation, some pseudogenes have been found to play active regulatory roles, for instance, by producing non-coding RNAs or acting as cis-regulatory elements [2]. Previous research has suggested the presence of an active promoter within an intronic region of DDX11L2, often discussed in the context of human chromosome evolution [3].

    This study aims to independently verify the transcriptional activity of this specific intronic region of DDX11L2 by analyzing comprehensive genomic and epigenomic datasets available through the UCSC Genome Browser. We specifically investigate the presence of key epigenetic hallmarks of active promoters, the classification of cis-regulatory elements, and direct evidence of RNA Polymerase II binding.

    2. Materials and Methods

    2.1 Data Sources

    Genomic and epigenomic data were accessed and visualized using the UCSC Genome Browser (genome.ucsc.edu), utilizing the Human Genome assembly hg38. The analysis focused on the genomic coordinates chr2:113599028-113603778, encompassing the DDX11L2 gene locus.

    The following data tracks were enabled and examined in detail:

    ENCODE Candidate cis-Regulatory Elements (cCREs): This track integrates data from multiple ENCODE assays to classify genomic regions based on their regulatory potential. The “full” display mode was selected to visualize the color-coded classifications (red for promoter-like, yellow for enhancer-like, blue for CTCF-bound) [4].

    Layered H3K27ac: This track displays ChIP-seq signal for Histone H3 Lysine 27 acetylation, a histone modification associated with active promoters and enhancers. The “full” display mode was used to visualize peak enrichment [5].

    ReMap Atlas of Regulatory Regions (RNA Polymerase II ChIP-seq): This track provides a meta-analysis of transcription factor binding sites from numerous ChIP-seq experiments. The “full” display mode was selected, and the sub-track specifically for “Pol2” (RNA Polymerase II) was enabled to visualize its binding profiles [6].

    DNase I Hypersensitivity Clusters: This track indicates regions of open chromatin, which are accessible to regulatory proteins. The “full” display mode was used to observe DNase I hypersensitive sites [4].

    GENCODE Genes and RefSeq Genes: These tracks were used to visualize the annotated gene structure of DDX11L2, including exons and introns.

    2.2 Data Analysis

    The analysis involved visual inspection of the co-localization of signals across the enabled tracks within the DDX11L2 gene region. Specific attention was paid to the first major intron, where previous studies have suggested an alternative promoter. The presence and overlap of red “Promoter-like” cCREs, H3K27ac peaks, and Pol2 binding peaks were assessed as indicators of active transcriptional initiation. The names associated with the cCREs (e.g., GSE# for GEO accession, transcription factor, and cell line) were noted to understand the experimental context of their classification.
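
    For reproducibility, the same co-localization check could in principle be scripted rather than performed by eye. The sketch below uses the public UCSC REST API (api.genome.ucsc.edu); the track identifier “encodeCcreCombined” and the item field names are assumptions that should be confirmed against the track’s table schema before use:

    ```python
    import requests

    # A sketch of how the cCRE annotations over the region of interest could be
    # retrieved programmatically from the public UCSC REST API. The track name
    # 'encodeCcreCombined' and the item field names are assumptions; confirm them
    # against the track's table schema in the browser before relying on them.
    REGION = dict(genome="hg38", chrom="chr2", start=113599028, end=113603778)

    def fetch_track(track):
        """Return the list of annotation items for one track over the region."""
        resp = requests.get("https://api.genome.ucsc.edu/getData/track",
                            params=dict(REGION, track=track), timeout=30)
        resp.raise_for_status()
        data = resp.json().get(track, [])
        # Some tracks nest items per chromosome; flatten if so.
        return data.get(REGION["chrom"], []) if isinstance(data, dict) else data

    ccres = fetch_track("encodeCcreCombined")
    for item in ccres:
        label = item.get("ucscLabel") or item.get("name", "")
        if "prom" in str(label).lower() or "PLS" in str(label):
            print(item.get("chromStart"), item.get("chromEnd"), label)
    ```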

    3. Results

    Analysis of the DDX11L2 gene locus on chr2 (hg38) revealed consistent evidence supporting the presence of an active alternative promoter within its first intron.

    3.1 Identification of Promoter-like cis-Regulatory Elements:

    The ENCODE cCREs track displayed multiple distinct red bars within the first major intron of DDX11L2, specifically localized around chr2:113,601,200 – 113,601,500. These red cCREs are computationally classified as “Promoter-like,” indicating a high likelihood of promoter activity based on integrated epigenomic data. Individual cCREs were associated with specific experimental identifiers, such as “GSE46237.TERF2.WI-38VA13,” “GSE102884.SMC3.HeLa-Kyoto_WAPL_PDS-depleted,” and “GSE102884.SMC3.HeLa-Kyoto_PDS5-depleted.” These labels indicate that the “promoter-like” classification for these regions was supported by ChIP-seq experiments targeting transcription factors like TERF2 and SMC3 in various cell lines (WI-38VA13, HeLa-Kyoto, and HeLa-Kyoto under specific depletion conditions).

    3.2 Enrichment of Active Promoter Histone Marks:

    A prominent peak of H3K27ac enrichment was observed in the Layered H3K27ac track. This peak directly overlapped with the cluster of red “Promoter-like” cCREs, spanning approximately chr2:113,601,200 – 113,601,700. This strong H3K27ac signal is a hallmark of active regulatory elements, including promoters.
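
    The co-localization described above reduces to a simple interval-overlap test on the reported coordinates (the intervals below are taken directly from Sections 3.1 and 3.2):

    ```python
    def overlaps(a_start, a_end, b_start, b_end):
        """True if two half-open genomic intervals on the same chromosome overlap."""
        return a_start < b_end and b_start < a_end

    # Intervals reported above (hg38, chr2).
    ccre_cluster = (113_601_200, 113_601_500)   # promoter-like cCREs (Section 3.1)
    h3k27ac_peak = (113_601_200, 113_601_700)   # H3K27ac enrichment (Section 3.2)

    print(overlaps(*ccre_cluster, *h3k27ac_peak))  # True: the signals co-localize
    ```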

    3.3 Direct RNA Polymerase II Binding:

    Crucially, the ReMap Atlas of Regulatory Regions track, specifically the sub-track for RNA Polymerase II (Pol2) ChIP-seq, exhibited a distinct peak that spatially coincided with both the H3K27ac enrichment and the “Promoter-like” cCREs in the DDX11L2 first intron. This direct binding of Pol2 is a definitive indicator of transcriptional machinery engagement at this site.

    3.4 Open Chromatin State:

    The presence of active histone marks and Pol2 binding strongly implies an open chromatin configuration. Examination of the DNase I Hypersensitivity Clusters track reveals a corresponding peak, further supporting the accessibility of this region for transcription factor binding and initiation.

    4. Discussion

    The integrated genomic data from the UCSC Genome Browser provides compelling evidence for an active alternative promoter within the first intron of the human DDX11L2 gene. The co-localization of “Promoter-like” cCREs, robust H3K27ac signals, and direct RNA Polymerase II binding collectively demonstrates that this region is actively engaged in transcriptional initiation.

    The classification of cCREs as “promoter-like” (red bars) is based on a sophisticated integration of multiple ENCODE assays, reflecting a comprehensive biochemical signature of active promoters. The specific experimental identifiers associated with these cCREs (e.g., ERG, TERF2, SMC3 ChIP-seq data) highlight the diverse array of transcription factors that can bind to and contribute to the regulatory activity of a promoter. While ERG, TERF2, and SMC3 are not RNA Pol II itself, their presence at this locus, in conjunction with Pol II binding and active histone marks, indicates a complex regulatory network orchestrating transcription from this alternative promoter.

    The strong H3K27ac peak serves as a critical epigenetic signature, reinforcing the active state of this promoter. H3K27ac marks regions of open chromatin that are poised for, or actively undergoing, transcription. Its direct overlap with Pol II binding further strengthens the assertion of active transcription initiation.

    The direct observation of RNA Polymerase II binding is the most definitive evidence for transcriptional initiation. Pol II is the core enzyme responsible for synthesizing messenger RNA (mRNA) and many non-coding RNAs. Its presence at a specific genomic location signifies that the cellular machinery for transcription is assembled and active at that site.

    The findings are particularly interesting given that DDX11L2 is annotated as a pseudogene. This study adds to the growing body of literature demonstrating that pseudogenes, traditionally considered genomic “fossils,” can acquire or retain functional regulatory roles, including acting as active promoters for non-coding RNAs or influencing the expression of neighboring genes [2]. The presence of an active alternative promoter within DDX11L2 suggests a more intricate regulatory landscape than implied by its pseudogene annotation alone.

    5. Conclusion

    Through the integrated analysis of ENCODE and ReMap data on the UCSC Genome Browser, this study provides strong evidence that an intronic region within the human DDX11L2 gene functions as an active alternative promoter. The co-localization of “Promoter-like” cCREs, high H3K27ac enrichment, and direct RNA Polymerase II binding collectively confirms active transcriptional initiation at this locus. These findings contribute to our understanding of the complex regulatory architecture of the human genome and highlight the functional potential of regions, such as pseudogenes, that may have been previously overlooked.

    References

    [1] Carninci P. and Tagami H. (2014). The FANTOM5 project and its implications for mammalian biology. F1000Prime Reports, 6: 104.

    [2] Poliseno L. (2015). Pseudogenes: Architects of complexity in gene regulation. Current Opinion in Genetics & Development, 31: 79-84.

    [3] Tomkins J.P. (2013). Alleged Human Chromosome 2 “Fusion Site” Encodes an Active DNA Binding Domain Inside a Complex and Highly Expressed Gene—Negating Fusion. Answers Research Journal, 6: 367–375. (Note: While this paper was a starting point, the current analysis uses independent data for verification).

    [4] ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome. Nature, 489(7414): 57–74.

    [5] Rada-Iglesias A., et al. (2011). A unique chromatin signature identifies active enhancers and genes in human embryonic stem cells. Nature Cell Biology, 13(9): 1003–1013.

    [6] Chèneby J., et al. (2018). ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments. Nucleic Acids Research, 46(D1): D267–D275.