Creation Questions

Tag: science

  • An Argument for Agent Causation in the Origin of DNA’s Information

    An Argument for Agent Causation in the Origin of DNA’s Information

    NOTE: This is a design argument inspired by Stephen Meyer’s design argument from DNA. Importantly, specified complexity is replaced with semiotic code (which I feel is more precise), and intelligent design is replaced with agent causation (which I find preferable).

    This argument posits that the very nature of the information encoded in DNA, specifically its structure as a semiotic code, necessitates an intelligent cause in its origin. The argument proceeds by establishing two key premises: first, that semiotic codes inherently require intelligent (agent) causation for their creation, and second, that DNA functions as a semiotic code.

    Premise 1: The Creation of a Semiotic Code Requires Agent Causation (Intelligence)

    A semiotic code is a system designed for conveying meaning through the use of signs. At its core, a semiotic code establishes a relationship between a signifier (the form the sign takes, e.g., a word, a symbol, a sequence) and a signified (the concept or meaning represented). Crucially, in a semiotic code, this relationship is arbitrary or conventional, not based on inherent physical or chemical causation between the signifier and the signified. This requires an interpretive framework – a set of rules or a system – that is independent of the physical properties of the signifier itself, providing the means to encode and decode the meaning. The meaning resides not in the physical signal, but in its interpretation according to the established code.

    Consider examples like human language, musical notation, or traffic signals. The sound “stop” or the sequence of letters S-T-O-P has no inherent physical property that forces a vehicle to cease motion. A red light does not chemically or physically cause a car to stop; it is a conventionally assigned symbol that, within a shared interpretive framework (traffic laws and driver understanding), signifies a command to stop. This is distinct from a natural sign, such as smoke indicating fire. In this case, the relationship between smoke and fire is one of direct, necessary physical causation (combustion produces smoke). While an observer can interpret smoke as a sign of fire, the connection itself is a product of natural laws, existing independently of any imposed code or interpretive framework.

    The capacity to create and utilize a system where arbitrary symbols reliably and purposefully convey specific meanings requires more than just physical processes. It requires the ability to:

    • Conceive of a goal: To transfer specific information or instruct an action.
    • Establish arbitrary conventions: To assign meaning to a form (signifier) where no inherent physical link exists to the meaning (signified).
    • Design an interpretive framework: To build or establish a system of rules or machinery that can reliably encode and decode these arbitrary relationships.
    • Implement this system for goal-directed action: To use the code and framework to achieve the initial goal of information transfer and subsequent action based on that information.

    This capacity to establish arbitrary, rule-governed relationships for the purpose of communication and control is what we define as intelligence in this context. The creation of a semiotic code is an act of imposing abstract order and meaning onto physical elements according to a plan or intention. Such an act requires agent causation – causation originating from an entity capable of intentionality, symbolic representation, and the design of systems that operate based on abstract rules, rather than solely from the necessary interactions of physical forces (event causation).

    Purely natural, undirected physical processes can produce complex patterns and structures driven by energy gradients, chemical affinities, or physical laws (like crystal formation, which is a direct physical consequence of electrochemical forces and molecular structure, lacking arbitrary convention, an independent interpretive framework, or symbolic representation). However, they lack the capacity to establish arbitrary conventions where the link between form and meaning is not physically determined, nor can they spontaneously generate an interpretive framework that operates based on such non-physical rules for goal-directed purposes. Therefore, the existence of a semiotic code, characterized by arbitrary signifier-signified links and an independent interpretive framework for goal-directed information transfer, provides compelling evidence for the involvement of intelligence in its origin.

    Premise 2: DNA Functions as a Semiotic Code

    The genetic code within DNA exhibits the key characteristics of a semiotic code as defined above. Sequences of nucleotides (specifically, codons on mRNA) act as signifiers. The signifieds are specific amino acids, which are the building blocks of proteins.

    Crucially, the relationship between a codon sequence and the amino acid it specifies is not one of direct chemical causation. A codon (e.g., AUG) does not chemically synthesize or form the amino acid methionine through a direct physical reaction dictated by the codon’s molecular structure alone. Amino acid synthesis occurs through entirely separate biochemical pathways involving dedicated enzymes.

    Instead, the codon serves as a symbolic signal that is interpreted by the complex cellular machinery of protein synthesis – the ribosomes, transfer RNAs (tRNAs), and aminoacyl-tRNA synthetases. This machinery constitutes the interpretive framework.

    Here’s how it functions as a semiotic framework:

    • Arbitrary/Conventional Relationship: The specific assignment of a codon triplet to a particular amino acid is largely a matter of convention. While there might be some historical or biochemical reasons that biased the code’s evolution, the evidence from synthetic biology, where scientists have successfully engineered bacteria with different codon-amino acid assignments, demonstrates that the relationship is not one of necessary physical linkage but of an established (and in this case, artificially modified) rule or convention. Different codon assignments could work, but the system functions because the cellular machinery reliably follows the established rules of the genetic code.
    • Independent Interpretive Framework: The translation machinery (ribosome, tRNAs, synthetases) is a complex system that reads the mRNA sequence (signifier) and brings the correct amino acid (signified) to the growing protein chain, according to the rules encoded in the structure and function of the tRNAs and synthetases. The meaning (“add this amino acid now”) is not inherent in the chemical properties of the codon itself but resides in how the interpretive machinery is designed to react to that codon. This machinery operates independently of direct physical causation by the codon itself to create the amino acid; it interprets the codon as an instruction within the system’s logic.
    • Symbolic Representation: The codon stands for an amino acid; it is a symbol representing a unit of meaning within the context of protein assembly. The physical form (nucleotide sequence) is distinct from the meaning it conveys (which amino acid to add). This is analogous to the word “cat” representing a feline creature – the sound or letters don’t physically embody the cat but symbolize the concept.
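
    The arbitrariness point above can be illustrated with a toy translation table. This is a hedged sketch, not real bioinformatics: only four codons are included, and the “recoded” table is a hypothetical reassignment echoing the synthetic-biology experiments mentioned earlier. The point is that translation behaves like a table lookup, so a different convention works just as mechanically.

```python
# Minimal sketch: translation as table lookup, not chemistry.
# Only a handful of codons are included for illustration.
STANDARD_CODE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

# A hypothetical reassigned code: same lookup machinery,
# different (arbitrary) codon -> amino acid convention.
RECODED = dict(STANDARD_CODE, UAA="Gln")  # stop codon reassigned

def translate(mrna, code):
    """Read an mRNA string three letters at a time and map each
    codon through the supplied table, stopping at STOP."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = code[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUUAAGGC", STANDARD_CODE))  # ['Met', 'Phe']
print(translate("AUGUUUUAAGGC", RECODED))        # ['Met', 'Phe', 'Gln', 'Gly']
```

    Swapping one entry in the table changes the output without any change to the reading machinery, which is the sense in which the codon-to-amino-acid link is conventional rather than chemically forced.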

    Therefore, DNA, specifically the genetic code and the translation system that interprets it, functions as a sophisticated semiotic code. It involves arbitrary relationships between signifiers (codons) and signifieds (amino acids), mediated by an independent interpretive framework (translation machinery) for the purpose of constructing functional proteins (goal-directed information transfer).

    Conclusion: Therefore, DNA Requires Agent Causation in its Origin

    Based on the premises established:

    1. The creation of a semiotic code, characterized by arbitrary conventions, an independent interpretive framework, and symbolic representation for goal-directed information transfer, requires the specific capacities associated with intelligence and agent causation (intentionality, abstraction, rule-creation, system design).
    2. DNA, through the genetic code and its translation machinery, functions as a semiotic code exhibiting these very characteristics.

    It logically follows that the origin of DNA’s semiotic structure requires agent causation. The arbitrary nature of the code assignments and the existence of a complex system specifically designed to read and act upon these arbitrary rules, independent of direct physical necessity between codon and amino acid, are hallmarks of intelligent design, not the expected outcomes of undirected physical or chemical processes.

    Addressing Potential Objections:

    • Evolution and Randomness: While natural selection can act on variations in existing biological systems, it requires a self-replicating system with heredity – which presupposes the existence of a functional coding and translation system. Natural selection is a filter and modifier of existing information; it is not a mechanism for generating a semiotic code from scratch. Randomness, by definition, lacks the capacity to produce the specified, functional, arbitrary conventions and the integrated interpretive machinery characteristic of a semiotic code. The challenge is not just sequence generation, but the origin of the meaningful, rule-governed relationship between sequences and outcomes, and the system that enforces these rules.
    • “Frozen Accident” and Abiogenesis Challenges: Hypotheses about abiogenesis and early life (like the RNA world) face significant hurdles in explaining the origin of this integrated semiotic system. The translation machinery is a highly complex and interdependent system (a “chicken-and-egg” problem: codons require tRNAs and synthetases to be read, but tRNAs and synthetases are themselves encoded by and produced through this same system). The origin of the arbitrary codon-amino acid assignments and the simultaneous emergence of the complex machinery to interpret them presents a significant challenge for gradual, undirected assembly driven solely by chemical or physical affinities.
    • Biochemical Processes vs. Interpretation: The argument does not claim that a ribosome is a conscious entity “interpreting” in the human sense. Instead, it argues that the system it is part of (the genetic code and translation machinery) functions as an interpretive framework because it reads symbols (codons) and acts according to established, arbitrary rules (the genetic code’s assignments) to produce a specific output (amino acid sequence), where this relationship is not based on direct physical necessity but on a mapping established by the code’s design. This rule-governed, symbolic mapping, independent of physical causation between symbol and meaning, is the defining feature of a semiotic code requiring an intelligence to establish the rules and the system.
    • God-of-the-Gaps: This argument is not based on mere ignorance of a natural explanation. It is a positive argument based on the nature of the phenomenon itself. Semiotic codes, wherever their origin is understood (human language, computer code), are the products of intelligent activity involving the creation and implementation of arbitrary conventions and interpretive systems for goal-directed communication. The argument posits that DNA exhibits these defining characteristics and therefore infers a similar type of cause in its origin, based on a uniformity of experience regarding the necessary preconditions for semiotic systems.

    In conclusion, the sophisticated, arbitrary, and rule-governed nature of the genetic code and its associated translation machinery point to it being a semiotic system. Based on the inherent requirements for creating such a system—namely, the capacities for intentionality, symbolic representation, rule-creation, and system design—the origin of DNA’s information is best explained by the action of an intelligent agent.

  • Chromosome 2 Fusion: Evidence Out Of Thin Air?

    Chromosome 2 Fusion: Evidence Out Of Thin Air?

    The story is captivating and frequently told in biology textbooks and popular science: humans possess 46 chromosomes while our alleged closest relatives, chimpanzees and other great apes, have 48. The difference, evolutionists claim, is due to a dramatic event in our shared ancestry – the fusion of two smaller ape chromosomes to form the large human Chromosome 2. This “fusion hypothesis” is often presented as slam-dunk evidence for human evolution from ape-like ancestors. But when we move beyond the narrative and scrutinize the actual genetic data, does the evidence hold up? A closer look suggests the case for fusion is far from conclusive, perhaps even bordering on evidence conjured “out of thin air.”

    The fusion model makes specific predictions about what we should find at the junction point on Chromosome 2. If two chromosomes, capped by protective telomere sequences, fused end-to-end, we’d expect to see a characteristic signature: the telomere sequence from one chromosome (repeats of TTAGGG) joined head-to-head with the inverted telomere sequence from the other (repeats of CCCTAA). These telomeric repeats typically number in the thousands at chromosome ends.  

    The Missing Telomere Signature

    When scientists first looked at the proposed fusion region (locus 2q13), they did find some sequences resembling telomere repeats (IJdo et al., 1991). This was hailed as confirmation. However, the reality is much less convincing than proponents suggest.

    Instead of thousands of ordered repeats forming a clear TTAGGG…CCCTAA structure, the site contains only about 150 highly degraded, degenerate telomere-like sequences scattered within an ~800 base pair region. Searching a much larger 64,000 base pair region yields only 136 instances of the core TTAGGG hexamer, far short of a telomere’s structure. Crucially, the orientation is often wrong – TTAGGG motifs appear where CCCTAA should be, and vice-versa. This messy, sparse arrangement hardly resembles the robust structure expected from even an ancient, degraded fusion event.
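
    The kind of motif search described above can be sketched in a few lines of code. This is an illustration only: the sequence below is made up, and a real analysis would run against the actual 2q13 assembly rather than a toy string.

```python
# Toy sketch: count forward (TTAGGG) and reverse-complement (CCCTAA)
# telomere hexamers in a DNA string, as a fusion-site search would.
def count_motifs(seq, motif):
    """Count possibly overlapping occurrences of motif in seq."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

# Made-up sequence standing in for a stretch of the 2q13 region.
region = "TTAGGGTTAGGGACCCTAATTAGGGCCCTAACCCTAA"

forward = count_motifs(region, "TTAGGG")   # expected on the "left" arm
reverse = count_motifs(region, "CCCTAA")   # expected on the "right" arm
print(forward, reverse)
```

    A head-to-head fusion signature would show a long run of TTAGGG motifs, a junction, then a long run of CCCTAA; the argument above is that the real site shows neither the numbers nor the ordered orientation.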

    Furthermore, creationist biologist Dr. Jeffrey Tomkins discovered that this alleged fusion site is not merely inactive debris; it falls squarely within a functional region of the DDX11L2 gene, likely acting as a promoter or regulatory element (Tomkins, 2013). Why would a supposedly non-functional scar from an ancient fusion land precisely within, and potentially regulate, an active gene? This finding severely undermines the idea of it being simple evolutionary leftovers.

    The Phantom Centromere

    A standard chromosome has one centromere. Fusing two standard chromosomes would initially create a dicentric chromosome with two centromeres – a generally unstable configuration. The fusion hypothesis thus predicts that one of the original centromeres must have been inactivated, leaving behind a remnant or “cryptic” centromere on Chromosome 2.  

    Proponents point to alpha-satellite DNA sequences found around locus 2q21 as evidence for this inactivated centromere, citing studies like Avarello et al. (1992) and the chromosome sequencing paper by Hillier et al. (2005). But this evidence is weak. Alpha-satellite DNA is indeed common near centromeres, but it’s also found abundantly elsewhere throughout the genome, performing various functions.  

    The Avarello study, conducted before full genome sequencing, used methods that detected alpha-satellite DNA generally, not functional centromeres specifically. Their results were inconsistent, with the signal appearing in less than half the cells examined – hardly the signature of a definite structure. Hillier et al. simply noted the presence of alpha-satellite tracts, but these specific sequences are common types found on nearly every human chromosome and show no unique similarity or phylogenetic clustering with functional centromere sequences. There’s no compelling structural or epigenetic evidence marking this region as a bona fide inactivated centromere; it’s simply a region containing common repetitive DNA.

    Uniqueness and the Mutation Rate Fallacy

    Adding to the puzzle, the specific short sequence often pinpointed as the precise fusion point isn’t unique. As can be demonstrated using the BLAT tool, this exact sequence appears on human Chromosomes 7, 19, and the X and Y chromosomes. If this sequence is the unique hallmark of the fusion event, why does it appear elsewhere? The evolutionary suggestion that these might be remnants of other, even more ancient fusions is pure speculation without a shred of supporting evidence.

    The standard evolutionary counter-argument to the lack of clear telomere and centromere signatures is degradation over time. “The fusion happened millions of years ago,” the reasoning goes, “so mutations have scrambled the evidence.” However, this explanation crumbles under the weight of actual mutation rates.

    Using accepted human mutation rate estimates (Nachman & Crowell, 2000) and the supposed 6-million-year timeframe since divergence from chimps, we can calculate that the specific ~800 base pair fusion region would be expected to accumulate only a handful of point mutations over that entire period. The observed mutation rate is simply far too low to account for the dramatic degradation required to turn thousands of pristine telomere repeats and a functional centromere into the sequences we see today. Ironically, the known mutation rate argues against the degradation explanation needed to salvage the fusion hypothesis.
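
    For readers who want the arithmetic spelled out, here is a back-of-the-envelope Poisson sketch. The per-site rate and generation time are assumed inputs in the ballpark of Nachman and Crowell’s figures, not values taken from this article, so treat the output as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope sketch: expected point mutations in an ~800 bp
# region over the claimed divergence time. The rate and generation
# time below are assumptions (roughly Nachman & Crowell's ~2.5e-8
# per site per generation, and a 20-year generation).
import math

RATE_PER_SITE_PER_GEN = 2.5e-8   # assumed substitution rate
GENERATION_YEARS = 20            # assumed generation time
YEARS = 6_000_000                # claimed divergence timeframe
REGION_BP = 800                  # size of the proposed fusion site

generations = YEARS / GENERATION_YEARS
expected = RATE_PER_SITE_PER_GEN * generations * REGION_BP
p_zero = math.exp(-expected)     # Poisson chance of no mutations at all

print(f"expected mutations: {expected:.1f}")
print(f"P(no mutations): {p_zero:.4f}")
```

    Under these assumed inputs the region picks up only a handful of substitutions, which is the scale being contrasted with the wholesale scrambling of thousands of ordered repeats that the degradation explanation requires.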

    Common Design vs. Common Ancestry

    What about the general similarity in gene order (synteny) between human Chromosome 2 and chimpanzee chromosomes 2A and 2B? While often presented as strong evidence for fusion, similarity does not automatically equate to ancestry. An intelligent designer reusing effective plans is an equally valid, if not better, explanation for such similarities. Moreover, the “near identical” claim is highly exaggerated; large and significant differences exist in gene content, control regions, and overall size, especially when non-coding DNA is considered (Tomkins, 2011, suggests overall similarity might be closer to 70%). This makes sense when one considers that coding regions provide the recipes for proteins, which organisms with similar needs would be expected to share.

    Conclusion: A Story Of Looking for Evidence

    When the genetic data for human Chromosome 2 is examined without a pre-commitment to the evolutionary narrative, the evidence for the fusion event appears remarkably weak. So much so that it raises the question: was this a mad dash to explain the blatant differences between the human and chimp genomes? The expected telomere signature is absent, replaced by a short, jumbled sequence residing within a functional gene region. The evidence for a second, inactivated centromere relies on the presence of common repetitive DNA lacking specific centromeric features. The supposed fusion sequence isn’t unique, and known mutation rates are woefully insufficient to explain the degradation required by the evolutionary model over millions of years.

    The chromosome 2 fusion story seems less like a conclusion drawn from compelling evidence and more like an interpretation imposed upon ambiguous data to fit a pre-existing belief in human-ape common ancestry. The scientific data simply does not support the narrative. Perhaps it’s time to acknowledge that the “evidence” for this iconic fusion event may indeed be derived largely “out of thin air.”

  • Examining Claims of Macroevolution and Irreducible Complexity:

    Examining Claims of Macroevolution and Irreducible Complexity:

    A Creationist Perspective

    The debate surrounding the origin and diversification of life continues, with proponents of neo-Darwinian evolution often citing observed instances of speciation and adaptations as evidence for macroevolution and the gradual development of complex biological systems. A recent “MEGA POST” on Reddit’s r/DebateEvolution presented several cases purported to demonstrate these processes, challenging the creationist understanding of life’s history. This article will examine these claims from a young-Earth creationist viewpoint.

    The original post defined key terms, stating, “Macroevolution ~ variations in heritable traits in populations with multiple species over time. Speciation marks the start of macroevolution.” However, creationists distinguish between microevolution – variation and speciation within a created kind – and macroevolution – the hypothetical transition between fundamentally different kinds of organisms. While the former is observable and acknowledged, the latter lacks empirical support and the necessary genetic mechanisms.

    Alleged Cases of Macroevolution:

    The post presented eleven cases as evidence of macroevolution.

    1. Lizards evolving placentas: The observation of reproductive isolation in Zootoca vivipara with different modes of reproduction was highlighted. The author noted, “(This is probably my favourite example of the bunch, as it shows a highly non-trivial trait emerging, together with isolation, speciation and selection for the new trait to boot.)” From a creationist perspective, the development of viviparity within lizards likely involves the expression or modification of pre-existing genetic information within the lizard kind. This adaptation and speciation do not necessitate the creation of novel genetic information required for a transition to a different kind of organism.

    2. Fruit flies feeding on apples: The divergence of the apple maggot fly (Rhagoletis pomonella) into host-specific groups was cited as sympatric speciation. This adaptation to different host plants and the resulting reproductive isolation are seen as microevolutionary changes within the fruit fly kind, utilizing the inherent genetic variability.  

    3. London Underground mosquito: The adaptation of Culex pipiens f. molestus to underground environments was presented as allopatric speciation. The observed physiological and behavioral differences, along with reproductive isolation, are consistent with diversification within the mosquito kind due to environmental pressures acting on the existing gene pool.  

    4. Multicellularity in Green Algae: The lab observation of obligate multicellularity in Chlamydomonas reinhardtii under predation pressure was noted. The author stated this lays “the groundwork for de novo multicellularity.” While this is an interesting example of adaptation, the transition from simple coloniality to complex, differentiated multicellularity, as seen in plants and animals, requires a significant increase in genetic information and novel developmental pathways. The presence of similar genes across different groups could point to a common designer employing similar modules for diverse functions.  

    5. Darwin’s Finches, revisited 150 years later: Speciation in the “Big Bird lineage” due to environmental pressures was discussed. This classic example of adaptation and speciation on the Galapagos Islands demonstrates microevolutionary changes within the finch kind, driven by natural selection acting on existing variations in beak morphology.  

    6 & 7. Salamanders and Greenish Warblers as ring species: These examples of geographic variation leading to reproductive isolation were presented as evidence of speciation. While ring species illustrate gradual divergence, the observed changes occur within the salamander and warbler kinds, respectively, and do not represent transitions to fundamentally different organisms.  

    8. Hybrid plants and polyploidy: The formation of Tragopogon miscellus through polyploidy was cited as rapid speciation. The author noted that crossbreeding “exploits polyploidy…to enhance susceptibility to selection for desired traits.” Polyploidy involves the duplication of existing chromosomes and the combination of genetic material from closely related species within the plant kingdom. This mechanism facilitates rapid diversification but does not generate the novel genetic information required for macroevolutionary transitions.  

    9. Crocodiles and chickens growing feathers: The manipulation of gene expression leading to feather development in these animals was discussed. The author suggested this shows “how birds are indeed dinosaurs and descend within Sauropsida.” Creationists interpret the shared genetic toolkit and potential for feather development within reptiles and birds as evidence of a common design within a broader created kind, rather than a direct evolutionary descent in the Darwinian sense.  

    10. Endosymbiosis in an amoeba: The observation of a bacterium becoming endosymbiotic within an amoeba was presented as analogous to the origin of organelles. Creationists propose that organelles were created in situ with their host cells, designed for symbiotic relationships from the beginning. The observed integration is seen as a function of this initial design.

    11. Eurasian Blackcap: The divergence in migratory behavior and morphology leading towards speciation was highlighted. This represents microevolutionary adaptation within the bird kind in response to environmental changes.

    Addressing “Irreducible Complexity”:

    The original post also addressed the concept of irreducible complexity with five counter-examples.

    1. E. coli Citrate Metabolism in the LTEE: The evolution of citrate metabolism was presented as a refutation of irreducible complexity. The author noted that this involved “gene duplication, and the duplicate was inserted downstream of an aerobically-active promoter.” While this demonstrates the emergence of a new function, it occurred within the bacterial kind and involved the modification and duplication of existing genetic material. Therefore, there is no evidence here to suggest an evolutionary pathway for the origin of citrate metabolism.

    2. Tetherin antagonism in HIV groups M and O: The different evolutionary pathways for overcoming tetherin resistance were discussed. Viruses, with their rapid mutation rates and unique genetic mechanisms, present a different case study than complex cellular organisms. This is not analogous in the slightest.

    3. Human lactose tolerance: The evolution of lactase persistence was presented as a change that is “not a loss of regulation or function.” This involves a regulatory mutation affecting the expression of an existing gene within the human genome. Therefore, it’s not a gain either. This is just a semantic game.

    4. Re-evolution of bacterial flagella: The substitution of a key regulatory protein for flagellum synthesis was cited. The author noted this is “an incredibly reliable two-step process.” While this demonstrates the adaptability of bacterial systems, the flagellum itself remains a complex structure with numerous interacting components, none of which were shown to be gained or lost through a cumulative series of functional steps.

    5. Ecological succession: The development of interdependent ecosystems was presented as a challenge to irreducible complexity. However, ecological succession describes the interactions and development of communities of existing organisms, not the origin of the complex biological systems within those organisms.  

    Conclusion:

    While the presented cases offer compelling examples of adaptation and speciation, we interpret these observations as occurring within the boundaries of created kinds, utilizing the inherent genetic variability designed within them. These examples do not provide conclusive evidence for macroevolution – the transition between fundamentally different kinds of organisms – nor do they definitively refute the concept of irreducible complexity in the origin of certain biological systems. The fact that so many of these are, if not neutral, loss-of-function or loss-of-information mutations creates a compelling case for creation as the inference to the best explanation. The creationist model, grounded in the historical robustness of the Biblical account and supported by scientific evidence (multiple cross-disciplinary lines), offers a coherent alternative explanation for the diversity and complexity of life. As the original post concluded,

    “if your only response to the cases of macroevolution are ‘it’s still a lizard’, ‘it’s still a fly you idiot’ etc, congrats, you have 1) sorely missed the point and 2) become an evolutionist now!”

    However, the point is not that change doesn’t occur (we expect that on our model), but rather the kind and extent of that change, which, from a creationist perspective, remains within the divinely established boundaries of created kinds and contradicts a universal common descent model.

    References:

    Teixeira, F., et al. (2017). The evolution of reproductive isolation during a rapid adaptive radiation in alpine lizards. Proceedings of the National Academy of Sciences, 114(12), E2386-E2393. https://doi.org/10.1073/pnas.1635049100

    Fonseca, D. M., et al. (2023). Rapid Speciation of the London Underground Mosquito Culex pipiens molestus. ResearchGate. https://doi.org/10.13140/RG.2.2.23813.22247

    Grant, P. R., & Grant, B. R. (2017). Texas A&M professor’s study of Darwin’s finches reveals species can evolve in two generations. Texas A&M Today. https://stories.tamu.edu/news/2017/12/01/texas-am-professors-study-of-darwins-finches-reveals-species-can-evolve-in-two-generations/

    Feder, J. L., et al. (1997). Allopatric host race formation in sympatric hawthorn maggot flies. Proceedings of the National Academy of Sciences, 94(15), 7761-7766. https://doi.org/10.1073/pnas.94.15.7761

    Tishkoff, S. A., et al. (2013). Convergent adaptation of human lactase persistence in Africa and Europe. Nature Genetics, 45(3), 233-240. https://doi.org/10.1038/ng.2529

  • Tiny Water Fleas, Big Questions About Evolution

    Tiny Water Fleas, Big Questions About Evolution

    Scientists recently spent a decade tracking the genetics of a tiny water creature called Daphnia pulex, a type of water flea. What they found is stirring up a lot of questions about how evolution really works.  

    Imagine you’re watching a group of people over ten years, noting every little change in their appearance. Now, imagine doing that with the genetic code of hundreds of water fleas. That’s essentially what these researchers did. They looked at how the frequencies of different versions of genes (alleles) changed from year to year.

    What they discovered was surprising. On average, most of the genetic variations they tracked didn’t seem to be under strong selection at all. In other words, most of the time, the different versions of genes were more or less equally successful. It’s like watching people over ten years and finding that, on average, nobody’s hair color really changed much.

    However, there was a catch. Even though the average trend was “no change,” there were a lot of ups and downs from year to year. One year, a particular gene version might be slightly more common, and the next year, it might be slightly less common. This means that selective pressures—the forces that push evolution—were constantly changing.

    Think of it like the weather. One day it’s sunny, the next it’s rainy, but the average temperature over the year might be pretty mild. The researchers called this “fluctuating selection.”
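
    That “fluctuating selection” picture can be illustrated with a tiny simulation. This is a hedged sketch, not the study’s actual model: each generation an allele receives a small selection coefficient drawn with mean zero, so the pressure keeps flipping direction and the frequency wanders without a sustained trend.

```python
# Toy sketch of fluctuating selection: each generation, allele A gets
# a small selection coefficient s drawn with mean zero, so pressure
# keeps flipping direction. Not the study's model; illustration only.
import random

random.seed(1)

def simulate(p0=0.5, generations=200, sigma=0.05):
    """Track allele frequency p under per-generation coefficients s
    drawn from a normal distribution centred on zero."""
    p = p0
    trajectory = [p]
    for _ in range(generations):
        s = random.gauss(0.0, sigma)          # this year's pressure
        w_bar = p * (1 + s) + (1 - p)         # mean fitness
        p = p * (1 + s) / w_bar               # standard selection update
        trajectory.append(p)
    return trajectory

traj = simulate()
print(f"start {traj[0]:.2f}, end {traj[-1]:.2f}, "
      f"min {min(traj):.2f}, max {max(traj):.2f}")
```

    Run repeatedly with different seeds, the frequency bounces around year to year while the long-run average drifts little, which is the qualitative pattern the researchers describe.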

    They also found that these genetic changes weren’t happening randomly across the whole genome. Instead, they were happening in small, linked groups of genes. These groups seemed to be working together, like little teams within the genome.  

    So, what does this all mean?

    Well, for one thing, it challenges the traditional idea of gradual, steady evolution via natural selection. If evolution were a slow, constant march forward, you'd expect to see consistent, directional changes in gene frequencies over time, promoted by the environment. But that's not what they found. Instead, they saw a lot of back-and-forth, with selection pressures constantly changing and largely canceling out to a net of zero.

    From a design perspective, this makes a lot of sense. Instead of random changes slowly building up over millions of years, this data suggests that organisms are incredibly adaptable, designed to handle constant environmental shifts. The “teams” of linked genes working together look a lot like pre-programmed modules, ready to respond to whatever challenges the environment throws their way.

    The fact that most gene variations are “quasi-neutral,” meaning they don’t really affect survival on average, also fits with the idea of a stable, created genome. Rather than constantly evolving new features, organisms might be designed with a wide range of genetic options, ready to be used when needed.

    This study on tiny water fleas is a reminder that evolution is a lot more complex than we often think. It’s not just about random mutations and gradual changes. It’s about adaptability, flexibility, and a genome that’s ready for anything. And maybe, just maybe, it’s about design.

    (Based on: The genome-wide signature of short-term temporal selection)

  • How Created Heterozygosity Explains Genetic Variation

    How Created Heterozygosity Explains Genetic Variation

    A Conceptual Introduction:

    The study of genetics reveals a stunning tapestry of diversity within the living world. While evolutionary theory traditionally attributes this variation to random mutations accumulated over vast stretches of time, a creationist perspective offers a compelling alternative: Created Heterozygosity. This hypothesis proposes that God designed organisms with pre-existing genetic variability, allowing for adaptation and diversification within created kinds. This concept not only aligns with biblical accounts but also provides a more coherent explanation for observed genetic phenomena.

    The evolutionary narrative hinges on the power of mutations to generate novel genetic information. However, the overwhelming evidence points to the deleterious nature of most mutations. This can be seen in the famous Long-Term Evolution Experiment (LTEE) with E. coli. Notice, in the graphic below (Van Hofwegen et al., 2016), just how much information is lost to selection pressure and mutation. This is known as genetic entropy: the gradual degradation of the genome through accumulated harmful mutations. Genetic entropy poses a significant challenge to the idea that random mutations can drive the complexification of life. Furthermore, the sheer number of beneficial mutations required to explain the intricate design of living organisms strains credulity.

    “Genomic DNA sequencing revealed an amplification of the citT and dctA loci and DNA rearrangements to capture a promoter to express CitT, aerobically. These are members of the same class of mutations identified by the LTEE. We conclude that the rarity of the LTEE mutant was an artifact of the experimental conditions and not a unique evolutionary event. No new genetic information (novel gene function) evolved.”

    In contrast, Created Heterozygosity suggests that God, the master engineer, imbued organisms with a pre-programmed potential for variation. Just as human engineers design systems with built-in flexibility, God equipped his creation with the genetic resources necessary to adapt to diverse environments. This concept resonates with the biblical affirmation that God created organisms “according to their kinds,” implying inherent boundaries within which variation can occur. Recent research, such as the ENCODE project and studies on the dark proteome, has revealed an astonishing level of complexity and functionality within the genome, further supporting the idea of a designed system.

    Baraminology, the study of created kinds, provides empirical support for Created Heterozygosity. The rapid diversification observed within baramins, such as the canid or feline kinds, can be readily explained by the expression of pre-existing genetic information. For example, the diverse array of dog breeds can be traced back to the inherent genetic variability within the canine kind, rather than the accumulation of countless beneficial mutations.

    Of course, objections arise. The role of mutations in adaptation is often cited as evidence against Created Heterozygosity. However, certain mutations may represent the expression of designed backup systems or pre-programmed responses to environmental changes. Moreover, the vast majority of observed genetic variation can be attributed to the shuffling and expression of existing genetic information, rather than the creation of entirely new information.

    The implications for human genetics are profound. Created Heterozygosity elegantly explains the high degree of genetic variation within the human population, while remaining consistent with the biblical account of Adam and Eve as the progenitors of all humanity. Research on Mitochondrial Eve and Y-Chromosome Adam/Noah further supports the idea of a recent, common ancestry for all people.

    In conclusion, Created Heterozygosity provides a compelling framework for understanding genetic variation from a creationist perspective. By acknowledging the limitations of mutation-driven evolution and recognizing the evidence for designed diversity, we can appreciate the intricate wisdom of the Creator and the coherence of the biblical narrative. This concept invites us to explore the vastness of genetic diversity with a renewed sense of awe, recognizing the pre-programmed potential inherent in God’s magnificent creation.

    Citation:

    1. Van Hofwegen, D. J., Hovde, C. J., & Minnich, S. A. (2016). Rapid Evolution of Citrate Utilization by Escherichia coli by Direct Selection Requires citT and dctA. Journal of bacteriology, 198(7), 1022–1034.
  • The Limits of Evolution

    The Limits of Evolution

    Yesterday, Dr. Rob Stadler gave a presentation on Dr. James Tour's YouTube channel that brings to light a compelling debate about the true extent of evolutionary capabilities. In their conversation, they delve into levels of confidence in evolutionary evidence, revealing a stark contrast between observable, high-confidence microevolution and the extrapolated, low-confidence claims of macroevolutionary transitions. This distinction, based on the levels of evidence as understood in medical science, raises profound questions about the sufficiency of evolutionary mechanisms to explain the vast diversity of life.

    Dr. Stadler, author of “The Scientific Approach to Evolution,” presents a rigorous framework for evaluating scientific evidence. He outlines six criteria for high-confidence results: repeatability, direct measurability, prospectiveness, unbiasedness, assumption-free methodology, and reasonable claims. Applying these criteria to common evolutionary arguments, such as the fossil record, geographic distribution, vestigial organs, and comparative anatomy, Dr. Stadler reveals significant shortcomings. These lines of evidence, he argues, fall short of the high-confidence threshold: they are not repeatable, they cannot be directly measured, they offer little (if any) predictive value, and, most importantly, they rely heavily on biased interpretation and assumption.

    However, the interview also highlights examples of high-confidence evolutionary studies. Experiments with E. coli bacteria, for instance, demonstrate the power of natural selection and mutation to drive small-scale changes within a population. These studies, repeatable and directly measurable, provide compelling evidence for microevolution. Yet, as Dr. Stadler emphasizes, extrapolating these observed changes to explain the origin of complex biological systems or the vast diversity of life is a leap of faith, not a scientific conclusion.

    The genetic differences between humans and chimpanzees further illustrate this point. While popular science often cites a 98% similarity, Dr. Stadler points out the significant differences, particularly in “orphan genes” and the regulatory functions of non-protein-coding DNA. These differences, he argues, challenge the notion of a simple, linear evolutionary progression.

    This aligns with the research of Dr. Douglas Axe, whose early work explored the probability of protein evolution. Axe’s findings suggest that the vast divergence between protein structures makes a common ancestor for all proteins highly improbable (Axe, 2000). This raises critical questions about the likelihood of orphan genes arising through random evolutionary processes alone, given the complexity and specificity of protein function.
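    The kind of arithmetic behind such improbability claims can be sketched in a few lines (the numbers below are purely hypothetical placeholders, not Axe's published estimates): if a fraction p of random sequences is functional and N sequences are ever sampled, the expected number of functional hits is simply N times p.

```python
# Purely illustrative arithmetic; the figures are hypothetical, not Axe's.
p_functional = 1e-40   # assumed fraction of sequences that are functional
n_trials = 1e30        # assumed number of sequences ever sampled
expected_hits = n_trials * p_functional
print(expected_hits)   # on the order of 1e-10: far fewer than one expected success
```

    When the expected number of successes falls far below one, the event is argued to be beyond the practical reach of a blind search, which is the shape of the argument being made here.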

    The core argument, as presented by Dr. Tour and Dr. Stadler, is not that evolution is entirely false. Rather, they contend that the high-confidence evidence supports only limited, small-scale changes, or microevolution. The leap to macroevolution, the idea that these small changes can accumulate to produce entirely new biological forms, appears to be a category error, based on our best evidence, and remains a low-confidence extrapolation.

    The video effectively presents case studies of evolution, demonstrating the observed limitations of evolutionary change. This evidence strongly suggests that evolutionary mechanisms are insufficient to account for the levels of diversity we observe today. The complexity of biological systems, the vast genetic differences between species, and the improbability of protein evolution challenge the core tenets of Neo-Darwinism and the Modern Synthesis.

    As Dr. Tour and Dr. Stadler articulate, a clear distinction must be made between observable, repeatable microevolution and the extrapolated, assumption-laden claims of macroevolution. While the former is supported by high-confidence evidence, the latter remains a subject of intense debate, demanding further scientific scrutiny.

    Works Cited

    • Tour, James, and Rob Stadler. “Evolution vs. Evidence: Are We Really 98% Chimp?” YouTube, uploaded by James Tour, https://www.youtube.com/watch?v=smTbYKJcnj8&t=2117s.
    • Axe, Douglas D. “Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors.” Journal of Molecular Biology, vol. 301, no. 3, 2000, pp. 585-595.
  • My Top 5 Favorite Creation Podcasts

    My Top 5 Favorite Creation Podcasts

    As a creation enthusiast, I’m always on the lookout for resources that delve into the fascinating intersection of science and the biblical narrative. Podcasts have become a fantastic avenue for exploring these topics in depth, and I’ve curated a list of my top five favorites that consistently deliver insightful and engaging content.

    1. Let’s Talk Creation:

    This podcast is a gem for anyone seeking thoughtful and accessible discussions on creation science. Hosted by two PhD creationists, Todd Wood (baraminology) and Paul Garner (geology), “Let’s Talk Creation” offers bimonthly episodes that are both informative and easy to digest. What I appreciate most is their level-headed approach and their ability to break down complex scientific concepts into understandable terms. You’ll walk away from each episode with new insights and a deeper appreciation for the creation model.

    2. Standing For Truth:

    “Standing For Truth” is a powerhouse of creation content. With a vast database of interviews featuring subject experts from every relevant field, this podcast provides a comprehensive exploration of creation science. While it can get a little technical at times, the in-depth discussions and expert perspectives make it a valuable resource for those seeking a more rigorous understanding of the evidence.

    3. Creation Ministries International:

    For high-quality production and a wide variety of topics, “Creation Ministries International” delivers. Their videos are visually engaging and provide digestible explanations of creation science concepts including a wide range of scientists, philosophers, and theologians. While they may not always delve into the deepest technical details, their content is perfect for those seeking a solid overview of the evidence and its implications.

    4. Creation Unfolding:

    If you’re particularly interested in geology and paleontology, “Creation Unfolding” is a must-listen. The main host, Dr. K. P. Coulson, a research geologist, brings a wealth of knowledge to the table, and the recurring guests provide diverse perspectives on these fascinating subjects. The laser-focused approach of this podcast makes it an invaluable resource for those seeking a deeper understanding of Earth’s history from a creationist perspective.

    5. Biblical Genetics:

    Dr. Robert Carter’s personal podcast, “Biblical Genetics,” is a treasure trove of information for anyone interested in the intersection of genetics and creation science. Dr. Carter, a renowned geneticist, tackles complex topics with clarity and precision, responding to popular-level content creators and professors with detailed explanations and analysis of technical papers. He skillfully guides listeners through intricate genetic concepts, making them accessible to a wider audience.


    These five podcasts represent a diverse range of perspectives and approaches to creation science. Whether you’re a seasoned creationist or just beginning to explore these topics, you’re sure to find valuable insights and engaging discussions within these podcasts.

  • Embryonic Similarities – Common Design, Not Common Descent

    Embryonic Similarities – Common Design, Not Common Descent

    For decades, textbook illustrations of Haeckel’s Embryos have been presented as a compelling visual argument for evolution. These side-by-side comparisons of vertebrate embryos, purportedly showing striking similarities in early developmental stages, have been used to argue for a shared evolutionary ancestry. However, a closer look reveals a story of misrepresentation and manipulation, rather than an accurate depiction of embryological evidence.

    Ernst Haeckel, a fervent supporter of Darwin’s theory, produced these drawings in the late 19th century. Yet, his illustrations were not faithful representations of actual embryos. He exaggerated similarities, omitted or altered developmental stages, and even used the same woodcut to represent different species. This deliberate manipulation aimed to bolster the concept of “recapitulation,” the now-discredited idea that embryonic development mirrors evolutionary history.

    The reality is that vertebrate embryos are far more distinct in their early stages than Haeckel portrayed. His illustrations were exposed as fraudulent even in his own time, yet they persisted in textbooks for generations, a testament to the power of visual propaganda in shaping scientific narratives.

    The argument that similarities in vertebrate embryos indicate a shared evolutionary history is challenged by several points.

    Challenging the “Recapitulation” Narrative

    One of the central tenets of the evolutionary argument is that embryonic development (“ontogeny”) reflects an organism’s evolutionary history (“phylogeny”). However, this concept, often summarized as “ontogeny recapitulates phylogeny,” is deeply flawed.

    • Embryonic Structures vs. Adult Structures: Embryonic features like pharyngeal slits and tails do not simply recapitulate the adult forms of ancestral organisms. Instead, they serve specific functions within the embryonic stage, often disappearing or transforming into entirely different structures in the adult. The embryonic mode of life is distinct from the adult mode.
    • Pre-Darwinian Recognition of Similarities: The recognition of embryonic similarities predates Darwin. Creationists viewed these similarities as a “God-given ‘pattern of unification’ that reflected the unity of nature,” emphasizing a common Creator’s design rather than evolutionary lineage.
    • Unique Development: The unique eye development in lampreys, transitioning from larval eyespots to adult camera eyes, demonstrates that developmental pathways do not always follow a simple, linear evolutionary progression.
    • Order of Development: The occasional appearance of later-stage developmental features earlier in the embryonic process further complicates the evolutionary narrative.

    Genetic and Developmental Complexity

    The genetic and developmental complexity underlying embryonic similarities points to intelligent design:

    • Genetic Similarity: The fact that damage to the pax6 gene cascade results in the loss of a functional eye across diverse animal groups highlights a fundamental genetic similarity, but this similarity does not necessitate a shared evolutionary history. It speaks to a common design blueprint.
    • Complex Regulatory Systems: The development of complex structures like the eye involves thousands of interacting genes and intricate regulatory systems. Such complexity is more consistent with intelligent design than with random evolutionary processes.
    • Common Design: The similarities observed in vertebrate embryos can be readily explained as a reflection of a common design by an intelligent Creator. Just as an engineer might use similar design principles in different models, a Creator might employ common developmental strategies across various organisms.

    A Creationist Interpretation

    From a creationist perspective, the similarities in vertebrate embryos are not evidence of evolutionary transitions but rather manifestations of a unified design plan. The Creator used common design elements to achieve diverse functions in different organisms. This approach aligns with the concept of baraminology, which studies created kinds and acknowledges variations within those kinds.

    The argument that embryonic similarities exclusively support evolution overlooks the possibility of intelligent design. By recognizing the complexity of developmental processes and the historical context of these observations, we can appreciate the power of a creationist explanation.

  • The Flood: A Brief Outline

    The Flood: A Brief Outline

    The biblical account of a global flood, as described in Genesis, provides a powerful and coherent framework for understanding the Earth’s geological history. This model challenges the conventional uniformitarian timescale and offers compelling explanations for numerous geological phenomena. Today I will provide an outline of some of the most interesting lines of evidence for a worldwide flood.

    I. Rapid Sedimentation & Fossilization

    The fossil record and sedimentary formations reveal evidence of rapid burial and deposition, indicative of catastrophic processes:

    • Delicate & Detailed Preservation: Exquisite fossils (Solnhofen Limestone) and fragile charcoal preservation indicate swift burial.
    • Mass Burial Graveyards: Massive fossil graveyards (Siberian mammoths, Redwall Limestone nautiloids) suggest events within hours, not millennia.
    • Absence of Decay and Scavenging: Preserved soft tissues (collagen, DNA) defy millions of years, requiring rapid burial.
    • Polystrate Fossils: Upright trees spanning multiple layers demand swift sediment accumulation.
    • Paleohydraulic Evidence and Bedding Plane Concentrations: Rapid depositional events and undisturbed charcoal layers support quick burial.
    • Lack of Bioturbation: Sharp layer boundaries and minimal biological disturbance indicate rapid burial.
    • Turbulent Deposition and High Energy Transport: Mixed sediments and hydrodynamic models support high-energy, rapid deposition.
    • Pulsed deposition: Multiple layers indicate multiple rapid events.
    • Turbidites: Underwater landslides indicate rapid sediment deposition.
    • Folded sedimentary layers (without metamorphism): Layers folded while still soft indicate rapid formation.
    • Sand injectites: Rapid liquefaction and deposition of sand.
    • Iodine retention: Volatile element presence indicates rapid burial.

    II. Marine Transgression, Fossil Distribution, & Geological Formations Are Global

    The global distribution of marine fossils and geological formations indicates a worldwide flood:

    • Extensive Deposits: Lateral continuity of formations (Morrison, Coconino) suggests rapid, continent-wide deposition.
    • Lateral continuity: Sedimentary layers that spread across continents indicate rapid and large-scale deposition.
    • Indicators of Marine Deposition in All Sediments: Marine fossils and structures throughout the geological column.
    • Water Levels Exceeding Terrestrial Plates Globally: Scale of deposits indicates water levels far exceeding current boundaries.
    • High energy transport: Size of transported sediments is impossible to explain with slow processes.
    • Clear turbulent deposition: Many sedimentary layers show evidence of high-energy water flow.
    • Mega-sequences correlating as extremes of known mechanisms: The size and scope of these deposits are best explained by a global flood.
    • No erosion between layers: The absence of erosional channels is best explained by rapid sequential deposition.
    • Universal evidence of marine deposited sandstones: Continents once covered by water.

    III. Rapid Erosion & Post Flood Events

    Post-flood geological features reveal rapid erosion and catastrophic water action:

    • Channel Scablands: Vast channels in the Pacific Northwest indicate powerful, rapid water flow.
    • Underfit Rivers and Meltwater Channels: Massive channels with small rivers suggest immense post-flood meltwater flows.
    • Erosional Features and Missoula Floods Evidence: Gigantic potholes and evidence of massive floods demonstrate post-flood water action.
    • Geological Structures and Erratics: Structures influencing flood flow and erratic boulders indicate dynamic post-flood processes.
    • Massive interconnected surface channels: Large water flows across continents.
    • Massive erosion events like the Grand Staircase: Best explained by a global flood.
    • Laminated canyon edges: Indicate rapid canyon formation.

    IV. Rapid Chemical Processes & Young-Age Indicators

    Chemical processes and age indicators challenge conventional timelines, supporting a young Earth:

    • Radiometric Dating Assumptions: Unverifiable assumptions in dating methods question deep-time estimates.
    • Radiohalos in Zircon Crystals and Helium Diffusion: Polonium radiohalos and helium retention indicate rapid formation and a young Earth.
    • Shoreline and Terrestrial Erosion: Current erosion rates are inconsistent with millions of years.
    • C14 in Fossils and Soft Tissue: Detectable C14 and preserved soft tissue challenge conventional timelines.
    • Lithosphere Subduction Temperatures: Thermal models point to a much younger subduction event.
    • Magnetic Field Decay: Earth’s decaying magnetic field suggests a younger age.
    • Ocean Salinity: Current salinity levels are too low for billions of years.

    In conclusion, the geological evidence, when viewed through a biblical lens, overwhelmingly supports the reality of a global flood and a young Earth. This framework provides a coherent and compelling explanation for the Earth’s geological history, challenging the conventional uniformitarian paradigm.

  • Beyond Naturalism and Towards True Knowledge

    Beyond Naturalism and Towards True Knowledge

    The very definition of science has undergone a subtle yet significant shift. Historically, science was understood as the pursuit of knowledge, a quest to understand the world around us through observation and reason. This pursuit inherently necessitates certain presuppositions: that the universe operates with causal connections, that truth is knowable, and that we can have confidence in our ability to discern it. However, modern science has often become synonymous with methodological naturalism, a philosophy that restricts scientific inquiry to natural causes, excluding any possibility of non-natural or supernatural agency. The RationalWiki page on Methodological Naturalism introduces the concept like so:

    Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method. Methodological naturalists limit their scientific research to the study of natural causes, because any attempts to define causal relationships with the supernatural are never fruitful, and result in the creation of scientific “dead ends” and God of the gaps-type hypotheses. To avoid these traps, scientists assume that all causes are empirical and naturalistic, which means they can be measured, quantified, and studied methodically.

    However, this assumption of naturalism need not extend beyond an assumption of methodology. This is what separates methodological naturalism from philosophical naturalism — the former is merely a tool and makes no truth claim, while the latter makes the philosophical — essentially atheistic — claim that only natural causes exist.

    The distinction between methodological and ontological naturalism, while often presented as a clear boundary, is, in practice, a strategic rhetorical move. Methodological naturalism purports to be a neutral, non-ontological framework for scientific inquiry. It claims to be a mere rule of engagement—that science should only investigate natural phenomena using natural explanations. Yet, in its application, it inexorably leads to ontological conclusions. By systematically excluding the possibility of non-natural causes a priori, science creates a worldview in which naturalism appears to be the only viable explanation for everything. This isn’t a discovery; it’s a foregone conclusion derived from the very rules of the game.


    The assumptions underpinning science are the most glaring example of this flawed logic. Science demands that phenomena be testable, repeatable, and observable, yet it rests on a foundation of unproven, non-empirical assumptions. We must assume logic, order, and consistency in nature—presuppositions that are not themselves testable by the scientific method. This creates a paradox: science, in its pursuit of knowledge, relies on foundational truths that are, by its own criteria, unscientific.


    This arbitrary limitation is particularly problematic when we consider the concept of agent causation. In fields like forensics, we readily distinguish between natural and volitional causes. We can conclude, based on empirical evidence, that an event was caused by an agent’s intent or will, even though that intent is not a physical object we can measure. There is already a precedent for including non-material causes in our models of reality. Science, as a system for making models that account for data, should be open to all potential causal explanations, not just those that fit within a pre-approved, naturalist box. By artificially fixing its scope to exclude supernatural causes, science pre-determines its own conclusions and, in doing so, sacrifices the pursuit of a more complete truth about reality. It becomes a system for confirming its own biases, rather than an open-ended quest for knowledge.


    Further, this limitation creates a profound epistemological problem. Consider the analogy of a painting: while analyzing the physical components of the paint and canvas can provide valuable information, it does not explain the origin or intent of the artwork. Even if we limited the inquiry to natural processes and discovered how the components could have come together through entirely naturalistic means, that would not make naturalism the only explanation, nor the most parsimonious one.

    Consider forensics again, and not just forensics but also archaeology, information theory, the search for extraterrestrial intelligence (SETI), and geography. These fields routinely investigate both natural and non-natural causes; embedded within them is the idea of agent causation, intentionality, and will. Archaeology examines artifacts to understand the cultural and intellectual agency of past civilizations. Information theory can examine material that, relative to its environment, is high in free energy; this is usually described as complex and specified information. SETI demonstrates that science can test for non-natural causes, such as intelligent signals from distant galaxies. Geography likewise studies how humans have shaped the natural processes and landforms of their environments through farming and infrastructure.
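    The information-theoretic side of this can be made concrete with a minimal sketch. Shannon entropy measures only the "complexity" half of the picture (specification is a further criterion that this metric does not capture), but it shows how a quantitative measure can distinguish repetitive material from varied material:

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Per-symbol Shannon entropy of a string, in bits."""
    counts = Counter(s)
    n = len(s)
    # sum of p * log2(1/p) over each distinct symbol
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))  # prints 0.0  (fully repetitive signal)
print(shannon_entropy("ACGTACGT"))  # prints 2.0  (four symbols, evenly used)
```

    A purely repetitive string carries zero entropy, while a varied one carries more; the design argument's further claim is that function-bearing sequences are both high in such complexity and independently specified, a property entropy alone cannot certify.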


    Why, then, is natural science uniquely restricted?


    The claim that science will eventually explain all phenomena through natural processes creates a logical contradiction. Methodological naturalism, by its very nature, cannot detect non-natural causes. Therefore, any conclusions drawn from this limited methodology are inherently incomplete. Scientific methodology is rooted in epistemological assumptions, and flawed assumptions lead to incomplete or inaccurate conclusions. Pragmatism, while useful, is insufficient for pursuing truth if it ignores potential causal factors.


    Counterexamples abound, highlighting that science is not always confined to strict naturalism. Studies on prayer and near-death experiences, for instance, explore non-natural influences. These examples underscore the fact that the a priori rejection of non-natural causes is a philosophical position that requires justification, especially given the prevalence of dual-causal investigations in other fields.


    From a creationist perspective, excluding supernatural processes as potential causal explanations is not only unscientific but also detrimental to the pursuit of true knowledge. The goal of science should be to determine the causes and mechanisms underlying observed phenomena, regardless of whether they are natural or involve intelligent agency. The term “supernatural” refers to causes that are not due to physical laws and chemistry, such as programming or other information input. Excluding these potential causes compromises the integrity of scientific inquiry.


    A true scientist must follow all leads and consider all possibilities to ensure that the most accurate and comprehensive model is upheld. Science is grounded in the principles of evidence-based reasoning, and the evidence may lead to non-natural or supernatural causes. If naturalism is to be a consistent and reliable methodology, it must be applied across all scientific disciplines, including forensics and historical sciences.


    In conclusion, the pursuit of knowledge should not be constrained by arbitrary philosophical limitations. By embracing a broader definition of science that includes the possibility of non-natural causes, we can move closer to a more complete and accurate understanding of the universe. This approach aligns with the creationist worldview, which recognizes the intelligent design and purpose inherent in the natural world.