Creation Questions

Author: Wesley Coleman

  • Specious Extrapolations in Origin of Species

    Specious Extrapolations in Origin of Species

    In The Origin of Species, Darwin outlines evidence against the contemporary notion of species fixity, i.e., the idea that species represent immovable boundaries. He first uses the concepts of variations alongside his introduced mechanism of natural selection to create a plausible case for not merely variations, breeds, or races of organisms, but indeed species as commonly descended. Then, in chapter 4, after introducing a taxonomic tree as a picture of biota diversification, he writes, 

    “I see no reason to limit the process of modification, as now explained, to the formation of genera alone.”

    This sentence encapsulates the theoretical move that introduced the concept of universal common ancestry as a permissible and presently accepted scientific model. There is much to discuss regarding the arguments and warrants of the modern debate; however, let us take Darwin on his own terms. In those opening chapters of his seminal work, was Darwin’s extrapolation merited? Do the mechanisms and the evidence put forth for them bring us to this inevitable conclusion, or is the argument perhaps yet inconclusive? In this essay, we will argue that, while Darwin’s analogical reasoning was ingenious, his reliance on uniformitarianism and nominalism may render his extrapolation less secure than it first appears.

    In order to explain this, one must first understand the logical progression Darwin must follow. His argument rests on three major assumptions, or premises: (1) analogism, the thesis that artificial selection is analogous to natural selection; (2) uniformitarianism, the thesis that variation is a mostly consistent and uniform process through biological time; and (3) nominalism, the thesis that all variations and, therefore, all forms, vary by degree only and not kind. Here, we use ‘nominalism’ in the sense that species categories reflect human classification rather than intrinsic natural divisions, a position Darwin implicitly adopts.

    Of his three assumptions, one shows itself to be particularly strong—that of analogism. He spends most of the first four chapters defending this premise from multiple angles. He goes into detail on the powers of artificial selection in chapter one. His detail helps us identify which particular aspect of artificial selection leads to the observed robustness and fitness within its newly delineated populations. For this, he highlights mild selection over a long time. While one can see a drastic change in quick selection, this type of selection is less sustainable. It offers a narrower range of variable options (as variations take time to emerge).

    However, even with this carefully developed premise, let us not overlook its flaws. Notice that the evidence for the power of long-term selection is said to show that it brings about more robust or larger changes within some organisms in at least some environments. However, what evidence does Darwin present to demonstrate this case?

    Darwin does not provide a formal, quantifiable, long-term experiment to demonstrate the superiority of mild, long-term selection. Instead, he relies on descriptive, historical examples from breeders’ practices and then uses a logical argument based on the nature of variation. Thus, Darwin’s appeal demonstrates plausibility, not proof. This is an important distinction if one is to treat natural selection as a mechanism of universal transformation rather than limited adaptation.

    Even still, the extrapolation of differential selection, and the environment’s role in it, is not egregiously contentious or strange. Moreover, perhaps surprisingly, the assumption of analogism proves to be the most defensible of the three extrapolations. The assumptions that stand in more doubt are uniformitarianism and nominalism, which will occupy the rest of this essay. These two assumptions undergird Darwin’s broader inference. When formalized, they resemble the following abductive arguments:

    Argument from Persistent Variation and Selection:

    Premise 1: If the mechanisms of variation and natural selection are persistent through time, then we can infer universal common descent.

    Premise 2: The mechanisms of variation and natural selection are persistent through time.

    Conclusion: Therefore, we can infer universal common descent.

    Argument from Difference in Degree:

    Premise 1: If all life differs only by degree and not kind, then we can infer that variation is a sufficient process to create all modern forms of life.

    Premise 2: All life differs only by degree and not kind.

    Conclusion: Therefore, we infer that variation is a sufficient process to create all modern forms of life.

    From these inferential conclusions, we see the importance of the two final assumptions as a fountainhead of the stream of Darwinian theory. 

    Before moving on, a few disclaimers are in order. It is worth noting that both arguments are contingent on the assumption that biology has existed throughout long geological time scales, but that is to be put aside for now. Notice we are now implicitly granting the assumption of analogism, and this imported doctrine is, likewise, essential to any common descent arguments. Finally, it is also worth clarifying that Darwin’s repeated insistence that ‘no line of demarcation can be drawn’ between varieties and species exemplifies the nominalist premise on which this argument from degree depends.

    To test these assumptions and determine whether they are as plausible as Darwin takes them to be, we first need to examine their constituent evidence and whether they provide empirical or logical support for Darwin’s thesis.

    The uniformitarian view can be presented in several ways. For Darwin, the view was the lens through which he saw biology, based on the Principles of Geology as articulated by Charles Lyell. Overall, it is not a poor inferential standard by any means. There are, however, certain caveats that limit its relevance in any science. Essentially, the mechanism in question must be precisely known, so that what X is observed to do is never extrapolated into what X has not been shown to do, as though that too fell within its explanatory power.

    How Darwin frames the matter is to say, “I observe X happening at small scales; therefore X can accumulate indefinitely.” This is not inherently incorrect or poor science in and of itself. However, one might ask: if one does not know the specific mechanisms involved in this variation process, is it really plausible to extrapolate these unknown variables far into the past or the future? Without knowing how variation actually works (no Mendelian genetics, no understanding of heredity’s material basis), Darwin is in a conundrum. He cannot justify the assumption that variation is unlimited if he cannot explain what it would even mean for that proposition to be true across deep time. It is like measuring the Mississippi’s sediment deposition rate, as was done for over 170 years, and extrapolating it back to a time when the river’s delta would have spanned the Gulf of Mexico. Alternatively, it is like measuring the processes of water erosion along the White Cliffs of Dover and extrapolating back in time until England rejoins the European continent. In the first case, there is an apparent flaw in assuming constant deposition rates. In the second case, it is evident that water erosion alone could not have caused the original break between England and France.
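The structure of that uniformitarian inference can be made concrete with a small sketch. All numbers below are hypothetical, chosen only to illustrate how the constant-rate assumption, not the evidence itself, drives the conclusion:

```python
# Hypothetical illustration of the uniformitarian inference:
# observe a present-day rate, then project it backward as if constant.

observed_rate_cm_per_year = 0.5      # hypothetical modern deposition rate
total_depth_cm = 1_000_000           # hypothetical sediment column

# Uniformitarian estimate: assume the rate never changed.
inferred_age_years = total_depth_cm / observed_rate_cm_per_year
print(inferred_age_years)            # 2,000,000 years under the constant-rate assumption

# If, instead, half the column was laid down at ten times the modern rate
# (say, during flood episodes), the same evidence yields a different age.
alt_age_years = (total_depth_cm / 2) / observed_rate_cm_per_year \
              + (total_depth_cm / 2) / (10 * observed_rate_cm_per_year)
print(alt_age_years)                 # 1,100,000 years
```

The sediment column is identical in both runs; only the assumption about past rates changes, and the inferred history changes with it.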

    It is the latter issue that is of deep concern here. There are too many unknowns in this equation to make the extrapolation remotely scientific. To be fair, consistently observing a phenomenon does not always require an understanding of its mechanism before one may extrapolate. However, Darwin’s theory is historical in a way that gravity, disease, or early mechanistic explanations were not: it cannot be immediately tested. Darwin, at best, leaves us to do the bulk of the grunt work after indulging in what can only be called guesswork.

    Darwin’s second line of reasoning to reach the universal common ancestry thesis relies heavily on a philosophical view of reality: nominalism. For nominalism to be correct, all traits and features would need to be quantitatively different (longer/shorter, harder/softer, heavier/lighter, rougher/smoother) without any that are qualitatively different (light/dark, solid/liquid/gas, color/sound, circle/square). In order to determine whether biology contains quality distinctions, we must understand how and in what way kinds become differentiable.

    The best polemical examples of discrete things, which differ in more than just degree, are colors. Colors can be hard to pin down on occasion. Darwin would have an easy time, as he did in the taxonomic discourse on species and varieties, pointing out the divided schools of thought in the classification of colors. Intuitively, there is a straightforward flow from some red to some blue. Even if the two are mostly distinguishable, is that cloud or wash of in-betweens not enough to question the whole enterprise of genuine or authentic categories?

    However, moving from blue to yellow is not just an increase or decrease in something; it is a change to an entirely new color identity. It is a new form. The perceptual experience of blue is qualitatively different from the perceptual experience of yellow, meaning the two affect the viewer in particular and different ways. Hues, specifically, are indeed highly differentiated and are clear species within the genus of color. An artist mixing blue and yellow to create green does not thereby prove that blue and yellow are not real, distinct colors, only that intermediates are possible. Likewise, it is no business of the taxonomist, who calls some groups species and others varieties, to negate the realness of any of these separate groups and count them as arbitrary and nominal. If colors, which exist on a continuous spectrum of wavelengths, still exhibit qualitative differences, then Darwin’s assumption that all biological features exist only on quantitative gradients becomes questionable.

    However, Darwin has done this very thing, representing different kinds of structures, with different developmental origins and functional architectures, as a mere spectrum with no distinct characters or purposes. Darwin needs variation to be infinitely plastic, but what does he say to real biological constraints? Is it ever hard to tell the difference between a plant and an animal? A beak from fangs? A feather from fur? A nail from a claw? A leaf from a pine needle? What if body plans have an inherent organizational logic that resists certain transformations? He is treating organisms like clay that can be molded into any form, but what if they are more like architectural structures with load-bearing walls? Darwin is missing good answers to these concerns, all of which need answers before one may call the Argument from Difference in Degree sound or convincing.

    This critique does not diminish Darwin’s achievement in proposing a naturalistic mechanism for adaptation. Instead, it highlights the philosophical assumptions embedded in his leap from observable variation to universal common descent, assumptions that, in 1859, lacked the mechanistic grounding that would make such extrapolation scientifically secure.

  • The Five Major Challenges To Hume’s Skepticism

    The Five Major Challenges To Hume’s Skepticism

    In David Hume’s book A Treatise of Human Nature, he constructs what he calls the science of man. One cannot rightly understand any other species of science before this foundational science. The most radical and paradigm-shifting realization, for Hume, is that if all that exists are impressions and ideas, there are no grounds to truly justify putting any two impressions together causally, no matter how we might be inclined or disposed to do so, either by vulgar habit or through any rational means. This profound insight — that impressions are singular moments of a particular feeling with no relation except that of imagination — forced philosophers (including critics such as Reid) to deeply re-evaluate theories of knowledge acquisition and general epistemic concerns.

    Reid says this in his dedication for An Inquiry into the Human Mind, “His reasoning appeared to me to be just: there was therefore a necessity to call in question the principles upon which it was founded, or to admit the conclusions.” However, there are more reasons than the mere founding principles to reject Hume’s rationale. Drawing on a recent and rigorous debate, here are the five major critiques that make me skeptical of Hume’s skeptical conclusions.

    1. Circular Reasoning (The Problem of Induction)

    Hume uses causal reasoning (observing past regularities and inferring principles about human nature) to undermine the rational basis of causal reasoning. Suppose Hume justifies the separation of cause and correlation from experience, and he uses the distinction to describe and also argue against cause-and-effect as existing outside the mind (outside a relation/idea). In that case, he is making a circular argument. If belief in necessary connection is understood apart from reason, then there is equally no reason to undermine causal reasoning. Either the basis for a necessary connection is reason and logical deduction, and thus we can infer it from particular impressions, or it is not, and then Hume’s own inferences from particular impressions are equally unjustified. Either way, nothing falls to his skeptical rebuttal. You cannot easily conceive of a cause without an effect, any more than a premise without a conclusion.

    2. The Self-Refutation of Assertion and Communication

    The fact that Hume is making an argument refutes his point entirely. On what grounds can Hume either 1. make a distinction between kinds of necessity or 2. place either relations or matters of fact squarely into one category? Unthinkable things are equivalent to non-existent things, according to Hume. Therefore, you cannot make claims about external reality with reference to non-existent concepts. Even concepts of the imagination must exist by virtue of real impressions that have newly associated connections. Where are the impressions for a law such as non-contradiction?

    Hume believes we cannot know that a table exists, so his project is not simply descriptive. His outward attempts to convince others, and the fact that he has followers who support his theory, testify against him. Psychological interpretations of reality are false simply because meaning exists apart from the mechanical goings-on of the mind, and that meaning is communicable. The very fact that Hume is articulating his theory indicates as much. Even a phenomenological view is better than psychologism.

    3. The Ad Hoc Assumption of External Existence

    Hume asks for the impression that gives rise to the idea of continuation and external existence separate from our perception, but where does he get the idea of continuation and external existence in the first place? If everything is sense impressions, how is he arguing against anything contrary to sense impressions? This is all very ad hoc, dismissing such concepts as mere fabrications of the imagination. Does he not realize that by doing so, he condemns the very principles which allowed him to condemn continuation and external existence?

    4. The Active Nature of Impressions, Not Raw Data

    There is also another popular critique of Hume: the notion of the tree falling in the woods. If no one is present, the tree falls without making a sound, for a sound is something that can only be heard. The point is that Hume’s impressions already imply cause-and-effect before they are even interpreted or registered. Consider another example. If two people hear a recording of an orchestra, but one of them has finely tuned ears for orchestration while the other does not, then, on a first hearing, the one with finely tuned ears will hear the counter-melody played on the violin, while the one without, unsurprisingly, will not. Hume would have to acknowledge this as an impression reflected and interpreted by relation (all of it in a near-instant), yet that implies a higher acuity has been granted to one listener in the realm of a particular sense. If sense is raw data, and therefore something that you receive and do not create, it stands to reason that you should not be able to improve in the tacit reception of raw data. This analogy highlights the inherent tension in Hume’s account, suggesting that our senses are not passive receptors of information but active interpreters that can improve over time.

    5. The Flawed Equivalence of Conceivability and Possibility

    A rigorous philosophical objection to Hume’s conclusion on necessity centers on his premise that what is conceivable is logically possible. Hume argues that because we can conceive of a cause without its usual effect (e.g., imagining the sun not rising) without contradiction, the necessary connection is not a truth of reason, but of habit. However, this conflates a psychological possibility (what we can imagine) with a metaphysical possibility (what could actually happen in reality). Contemporary critics argue that our inability to conceive of a contradiction in a causal break may reflect our epistemic limitations (our ignorance of deep, non-obvious natural laws) rather than a statement about the world itself. Therefore, the supposed “freedom” of the imagination that underpins his skepticism is merely a function of our ignorance of actual natural necessity, and his argument fails to prove that the necessity is truly absent from the objects themselves.

  • The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The evidence typically presented as definitive proof for the theory of common descent, the nested hierarchy of life and genetic/trait similarities, is fundamentally agnostic. This is because evolutionary theory, in its broad explanatory power, can be adapted to account for virtually any observed biological pattern post-hoc, thereby undermining the claim that these patterns represent unique or strong predictions of common descent over alternative models, such as common design.

    I. The Problematic Nature of “Prediction” in Evolutionary Biology

    1. Strict Definition of Scientific Prediction: A true scientific prediction involves foretelling a specific, unobserved phenomenon before its discovery. It is not merely explaining an existing observation or broadly expecting a general outcome.
    2. Absence of Specific Molecular Predictions:
      • Prior to the molecular biology revolution (pre-1950s/1960s), no scientist explicitly predicted the specific molecular similarity of DNA sequences across diverse organisms, the precise double-helix structure, or the near-universal genetic code. These were empirical discoveries, not pre-existing predictions.
      • Evolutionary explanations for these molecular phenomena (e.g., the “frozen accident” hypothesis for the universal genetic code) were formulated after the observations were made, rendering them post-hoc explanations rather than predictive triumphs.
      • Interpreting broad conceptual statements from earlier evolutionary thinkers (like Darwin’s “one primordial form”) as specific molecular predictions is an act of “eisegesis”—reading meaning into the text—rather than drawing direct, testable predictions from it. A primordial form does not necessitate universal code, universal protein sequences, universal logic, or universal similarity.

    II. The Agnosticism of the Nested Hierarchy

    1. The Nested Hierarchy as an Abstract Pattern: The observation that life can be organized into a nested hierarchy (groups within groups, e.g., species within genera, genera within families) is an abstract pattern of classification. This pattern existed and was recognized (e.g., by Linnaeus) long before Darwin’s theory of common descent.
    2. Compatibility with Common Design: A designer could, for various good reasons (e.g., efficiency, aesthetic coherence, reusability of components, comprehensibility), choose to create life forms that naturally fall into a nested hierarchical arrangement. Therefore, the mere existence of this abstract pattern does not uniquely or preferentially support common descent over a common design model.
    3. Irrelevance of Molecular “Details” for this Specific Point: While specific molecular “details” (such as shared pseudogenes, endogenous retroviruses, or chromosomal fusions) are often cited as evidence for common descent, these are arguments about the mechanisms or specific content of the nested hierarchy. These are not agnostic and can be debated fruitfully. However, they do not negate the fundamental point that the abstract pattern of nestedness itself remains agnostic, as it could be produced by either common descent or common design.

    III. Evolutionary Theory’s Excessive Explanatory Flexibility (Post-Hoc Rationalization)

    1. Fallacy of Affirming the Consequent: The logical structure “If evolutionary theory (Y) is true, then observation (X) is expected” does not logically imply “If observation (X) is true, then evolutionary theory (Y) must be true,” especially if the theory is so flexible that it can explain almost any X.
    2. Capacity to Account for Contradictory or Diverse Outcomes:
      • Genetic Similarity: Evolutionary theory could equally well account for a model with no significant genetic similarity between organisms (e.g., if different biochemical pathways or environmental solutions were randomly achieved, or if genetic signals blurred too quickly over time). For example, it could accommodate a world with extreme proportions of horizontal gene transfer (as seen in prokaryotic and, rarely, eukaryotic cells).
      • Phylogenetic Branching: The theory is flexible enough to account for virtually any observed phylogenetic branching pattern. If, for instance, humans were found to be more genetically aligned with pigs than with chimpanzees, evolutionary theory would simply construct a different tree and provide a new narrative of common ancestry. This flexibility puts a wedge in any measure of predictability claimed by the theory.
      • “Noise” in Data: If genetic data were truly “noise” (random and unpatterned), evolutionary theory could still rationalize this by asserting that “no creator would design that way, and randomness fully accounts for it,” thus always providing an explanation regardless of the pattern. In fact, a noise pattern is perhaps one of the few patterns better explained by random physical processes. Why would a designer, who has intentionality, create in such a slapdash way?
      • Convergence vs. Divergence: The theory’s ability to explain both convergent evolution (morphological similarity without close genetic relatedness) and divergent evolution (genetic differences leading to distinct forms) should immediately signal red flags, as this is a telltale sign of a post-hoc fitting of observations rather than a result of specific prediction.
        • To illustrate this point, let’s imagine we have seven distinct traits (A, B, C, D, E, F, G) and five hypothetical populations of creatures (P1-P5), each possessing a unique combination of these traits. For example, P1 has {A, B, C}, P2 has {A, D, E}, P3 has {A, F, G}, P4 has {B, D, F}, and P5 has {E, G}. When examining this distribution, we can construct a plausible “evolutionary story.” Trait ‘A’, present in P1, P2, and P3, could be identified as a broadly ancestral trait. P1 might be an early branch retaining traits B and C, while P2 and P3 diversified by gaining D/E and F/G respectively.
        • However, the pattern becomes more complex with populations like P4 and P5. P4’s mix of traits {B, D, F} suggests it shares B with P1, D with P2, and F with P3. An evolutionary narrative would then employ concepts like trait loss (e.g., B being lost in P2/P3/P5’s lineage), convergent evolution (e.g., F evolving independently in P4 and P3), or complex branching patterns. Similarly, P5’s {E, G} would be explained by inheriting E from P2 and G from P3, while also undergoing significant trait loss (A, B, C, D, F).
        • And this is the crux of the argument: given any observed distribution of traits, evolutionary theory’s flexible set of explanatory mechanisms (including common ancestry, trait gain, trait loss, and convergence) can always construct a coherent historical narrative. This ability to fit diverse patterns post-hoc renders the mere existence of a nested hierarchy, disconnected from specific underlying molecular details, agnostic as evidence for common descent over other models like common design.
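The crux above can be sketched in a few lines of Python. Using the hypothetical populations P1-P5 and traits A-G from the example, posit an ancestor carrying the union of all observed traits; a loss-only narrative then fits any distribution whatsoever:

```python
# A minimal sketch (hypothetical populations/traits from the example above)
# showing that trait loss alone can always "explain" any distribution:
# posit an ancestor with the union of all traits, then read off the losses
# each lineage needs. No conceivable distribution can fail to fit.

populations = {
    "P1": {"A", "B", "C"},
    "P2": {"A", "D", "E"},
    "P3": {"A", "F", "G"},
    "P4": {"B", "D", "F"},
    "P5": {"E", "G"},
}

# Hypothetical ancestor: the union of every observed trait.
ancestor = set().union(*populations.values())

# Each lineage's "narrative" is simply the set of traits it lost.
losses = {name: ancestor - traits for name, traits in populations.items()}

for name, lost in sorted(losses.items()):
    print(f"{name}: lost {sorted(lost)}")
```

Because the union-ancestor plus per-lineage losses reproduces every population by construction, no trait distribution can ever falsify this narrative, which is precisely the explanatory flexibility the argument criticizes.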

    IV. Challenges to Specific Evolutionary Explanations and Assumptions

    1. Conservation of the Genetic Code:
      • The claim that the genetic code must remain highly conserved post-LUCA due to “catastrophic fitness consequences” of change is an unsubstantiated assumption. Granted, it could be true, but one can imagine plausible scenarios which could demonstrate exceptions.
      • Further, evolutionary theory already postulates radical changes, including the very emergence of complex systems “from scratch” during abiogenesis. If such fundamental transformations are possible, then the notion that a “new style of codon” is impossible over billions of years, even via incremental “patches and updates,” appears inconsistent.
      • Laboratory experiments that successfully engineer organisms to incorporate unnatural amino acids demonstrate the inherent malleability of the genetic code. Yet no experiment has demonstrated abiogenesis, a much more implausible event with less evolutionary time to play with. Why arbitrarily limit which improbable things are permissible?
      • There is no inherent evolutionary reason to expect a single, highly conserved “language” for the genetic code; if information can be created through evolutionary processes, then multiple distinct solutions should be the rule.
    2. Functionality of “Junk” DNA and Shared Imperfections:
      • The assertion that elements like pseudogenes and endogenous retroviruses (ERVs) are “non-functional” or “mistakes” is often an “argument from ignorance” or an “anti-God/atheism-of-the-gaps” fallacy. Much of the genome’s function is still unknown, and many supposedly “non-functional” elements are increasingly found to have regulatory or other biological roles. For instance, see my last article on the DDX11L2 “pseudo” gene, which operates as a regulatory element, including as a secondary promoter.
      • If these elements are functional, their homologous locations are easily explained by a common design model, where a designer reuses functional components across different creations.
      • The “functionality” of ERVs, for instance, is often downplayed in arguments for common descent, despite their known roles in embryonic development, antiviral defense, and regulation, thereby subtly shifting the goalposts of the argument.
    3. Probabilities of Gene Duplication and Fusion:
      • The probability assigned to beneficial gene duplications and fusions (which are crucial for creating new genetic information and structures) seems inconsistently high when compared to the low probability assigned to the evolution of new codon styles. If random copying errors can create functional whole genes or fusions, then the “impossibility” of a new codon style seems a little arbitrary.

    Conclusion:

    The overarching argument is that while common descent can certainly explain the observed patterns in biology, its explanatory power often relies on post-hoc rationalization and a flexibility that allows it to account for almost any outcome. This diminishes the distinctiveness and predictive strength of the evidence, leaving it ultimately agnostic when compared to alternative models that can also account for the same observations through different underlying mechanisms.

  • Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Evidence for an Active Alternative Promoter in the Human DDX11L2 Gene

    Abstract

    The human genome contains numerous regulatory elements that control gene expression, including canonical and alternative promoters. While DDX11L2 is annotated as a pseudogene, its functional relevance in gene regulation has been a subject of interest. This study leverages publicly available genomic data from the UCSC Genome Browser, integrating information from the ENCODE project and ReMap database, to investigate the transcriptional activity within a specific intronic region of the DDX11L2 gene (chr2:113599028-113603778, hg38 assembly). Our analysis reveals the co-localization of key epigenetic marks, candidate cis-regulatory elements (cCREs), and RNA Polymerase II binding, providing robust evidence for an active alternative promoter within this region. These findings underscore the complex regulatory landscape of the human genome, even within annotated pseudogenes.

    1. Introduction

    Gene expression is a tightly regulated process essential for cellular function, development, and disease. A critical step in gene expression is transcription initiation, primarily mediated by RNA Polymerase II (Pol II) in eukaryotes. Transcription initiation typically occurs at promoter regions, which are DNA sequences located upstream of a gene’s coding sequence. However, a growing body of evidence indicates the widespread use of alternative promoters, which can initiate transcription from different genomic locations within or outside of a gene’s canonical promoter, leading to diverse transcript isoforms and complex regulatory patterns [1].

    The DDX11L2 gene, located on human chromosome 2, is annotated as a DEAD/H-box helicase 11 like 2 pseudogene. Pseudogenes are generally considered non-functional copies of protein-coding genes that have accumulated mutations preventing their translation into functional proteins. Despite this annotation, some pseudogenes have been found to play active regulatory roles, for instance, by producing non-coding RNAs or acting as cis-regulatory elements [2]. Previous research has suggested the presence of an active promoter within an intronic region of DDX11L2, often discussed in the context of human chromosome evolution [3].

    This study aims to independently verify the transcriptional activity of this specific intronic region of DDX11L2 by analyzing comprehensive genomic and epigenomic datasets available through the UCSC Genome Browser. We specifically investigate the presence of key epigenetic hallmarks of active promoters, the classification of cis-regulatory elements, and direct evidence of RNA Polymerase II binding.

    2. Materials and Methods

    2.1 Data Sources

    Genomic and epigenomic data were accessed and visualized using the UCSC Genome Browser (genome.ucsc.edu), utilizing the Human Genome assembly hg38. The analysis focused on the genomic coordinates chr2:113599028-113603778, encompassing the DDX11L2 gene locus.

    The following data tracks were enabled and examined in detail:

    ENCODE Candidate cis-Regulatory Elements (cCREs): This track integrates data from multiple ENCODE assays to classify genomic regions based on their regulatory potential. The “full” display mode was selected to visualize the color-coded classifications (red for promoter-like, yellow for enhancer-like, blue for CTCF-bound) [4].

    Layered H3K27ac: This track displays ChIP-seq signal for Histone H3 Lysine 27 acetylation, a histone modification associated with active promoters and enhancers. The “full” display mode was used to visualize peak enrichment [5].

    ReMap Atlas of Regulatory Regions (RNA Polymerase II ChIP-seq): This track provides a meta-analysis of transcription factor binding sites from numerous ChIP-seq experiments. The “full” display mode was selected, and the sub-track specifically for “Pol2” (RNA Polymerase II) was enabled to visualize its binding profiles [6].

    DNase I Hypersensitivity Clusters: This track indicates regions of open chromatin, which are accessible to regulatory proteins. The “full” display mode was used to observe DNase I hypersensitive sites [4].

    GENCODE Genes and RefSeq Genes: These tracks were used to visualize the annotated gene structure of DDX11L2, including exons and introns.

    2.2 Data Analysis

    The analysis involved visual inspection of the co-localization of signals across the enabled tracks within the DDX11L2 gene region. Specific attention was paid to the first major intron, where previous studies have suggested an alternative promoter. The presence and overlap of red “Promoter-like” cCREs, H3K27ac peaks, and Pol2 binding peaks were assessed as indicators of active transcriptional initiation. The names associated with the cCREs (e.g., GSE# for GEO accession, transcription factor, and cell line) were noted to understand the experimental context of their classification.
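The co-localization assessment described above is, at bottom, an interval-overlap test. A minimal sketch in Python, using the approximate spans reported in the Results section (the coordinate pairs here are illustrative, not exported browser data):

```python
# Sketch: the visual co-localization step expressed as a half-open
# interval-overlap test. Coordinates approximate the spans reported in
# the Results; the "peaks" below are illustrative stand-ins.

def overlaps(a, b):
    """True if half-open intervals (start, end) a and b share any base."""
    return a[0] < b[1] and b[0] < a[1]

# Approximate signal spans within the first intron of DDX11L2 (hg38, chr2).
ccre_promoter_like = (113_601_200, 113_601_500)
h3k27ac_peak = (113_601_200, 113_601_700)

print(overlaps(ccre_promoter_like, h3k27ac_peak))
```

Extending the same test across the Pol2 and DNase I tracks gives a simple programmatic check that all four signals coincide at the putative promoter.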

    3. Results

    Analysis of the DDX11L2 gene locus on chr2 (hg38) revealed consistent evidence supporting the presence of an active alternative promoter within its first intron.

    3.1 Identification of Promoter-like cis-Regulatory Elements:

    The ENCODE cCREs track displayed multiple distinct red bars within the first major intron of DDX11L2, specifically localized around chr2:113,601,200 – 113,601,500. These red cCREs are computationally classified as “Promoter-like,” indicating a high likelihood of promoter activity based on integrated epigenomic data. Individual cCREs were associated with specific experimental identifiers, such as “GSE46237.TERF2.WI-38VA13,” “GSE102884.SMC3.HeLa-Kyoto_WAPL_PDS-depleted,” and “GSE102884.SMC3.HeLa-Kyoto_PDS5-depleted.” These labels indicate that the “promoter-like” classification for these regions was supported by ChIP-seq experiments targeting transcription factors like TERF2 and SMC3 in various cell lines (WI-38VA13, HeLa-Kyoto, and HeLa-Kyoto under specific depletion conditions).

    3.2 Enrichment of Active Promoter Histone Marks:

    A prominent peak of H3K27ac enrichment was observed in the Layered H3K27ac track. This peak directly overlapped with the cluster of red “Promoter-like” cCREs, spanning approximately chr2:113,601,200 – 113,601,700. This strong H3K27ac signal is a hallmark of active regulatory elements, including promoters.

    3.3 Direct RNA Polymerase II Binding:

Crucially, the ReMap Atlas of Regulatory Regions track, specifically the sub-track for RNA Polymerase II (Pol2) ChIP-seq, exhibited a distinct peak that spatially coincided with both the H3K27ac enrichment and the “Promoter-like” cCREs in the DDX11L2 first intron. This direct binding of Pol2 is a strong indicator of transcriptional machinery engagement at this site.

    3.4 Open Chromatin State:

    The presence of active histone marks and Pol2 binding strongly implies an open chromatin configuration. Examination of the DNase I Hypersensitivity Clusters track reveals a corresponding peak, further supporting the accessibility of this region for transcription factor binding and initiation.

    4. Discussion

    The integrated genomic data from the UCSC Genome Browser provides compelling evidence for an active alternative promoter within the first intron of the human DDX11L2 gene. The co-localization of “Promoter-like” cCREs, robust H3K27ac signals, and direct RNA Polymerase II binding collectively demonstrates that this region is actively engaged in transcriptional initiation.

The classification of cCREs as “promoter-like” (red bars) is based on a sophisticated integration of multiple ENCODE assays, reflecting a comprehensive biochemical signature of active promoters. The specific experimental identifiers associated with these cCREs (e.g., TERF2 and SMC3 ChIP-seq data) highlight the diverse array of transcription factors that can bind to and contribute to the regulatory activity of a promoter. While TERF2 and SMC3 are not RNA Pol II itself, their presence at this locus, in conjunction with Pol II binding and active histone marks, indicates a complex regulatory network orchestrating transcription from this alternative promoter.

    The strong H3K27ac peak serves as a critical epigenetic signature, reinforcing the active state of this promoter. H3K27ac marks regions of open chromatin that are poised for, or actively undergoing, transcription. Its direct overlap with Pol II binding further strengthens the assertion of active transcription initiation.

    The direct observation of RNA Polymerase II binding is the most definitive evidence for transcriptional initiation. Pol II is the core enzyme responsible for synthesizing messenger RNA (mRNA) and many non-coding RNAs. Its presence at a specific genomic location signifies that the cellular machinery for transcription is assembled and active at that site.

    The findings are particularly interesting given that DDX11L2 is annotated as a pseudogene. This study adds to the growing body of literature demonstrating that pseudogenes, traditionally considered genomic “fossils,” can acquire or retain functional regulatory roles, including acting as active promoters for non-coding RNAs or influencing the expression of neighboring genes [2]. The presence of an active alternative promoter within DDX11L2 suggests a more intricate regulatory landscape than implied by its pseudogene annotation alone.

    5. Conclusion

    Through the integrated analysis of ENCODE and ReMap data on the UCSC Genome Browser, this study provides strong evidence that an intronic region within the human DDX11L2 gene functions as an active alternative promoter. The co-localization of “Promoter-like” cCREs, high H3K27ac enrichment, and direct RNA Polymerase II binding collectively confirms active transcriptional initiation at this locus. These findings contribute to our understanding of the complex regulatory architecture of the human genome and highlight the functional potential of regions, such as pseudogenes, that may have been previously overlooked.

    References

    [1] Carninci P. and Tagami H. (2014). The FANTOM5 project and its implications for mammalian biology. F1000Prime Reports, 6: 104.

    [2] Poliseno L. (2015). Pseudogenes: Architects of complexity in gene regulation. Current Opinion in Genetics & Development, 31: 79-84.

    [3] Tomkins J.P. (2013). Alleged Human Chromosome 2 “Fusion Site” Encodes an Active DNA Binding Domain Inside a Complex and Highly Expressed Gene—Negating Fusion. Answers Research Journal, 6: 367–375. (Note: While this paper was a starting point, the current analysis uses independent data for verification).

    [4] ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome. Nature, 489(7414): 57–74.

    [5] Rada-Iglesias A., et al. (2011). A unique chromatin signature identifies active enhancers and genes in human embryonic stem cells. Nature Cell Biology, 13(9): 1003–1013.

    [6] Chèneby J., et al. (2018). ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments. Nucleic Acids Research, 46(D1): D267–D275.

  • J. Budziszewski’s Natural Theology of Sex: A Pathway to Biblical Understanding

    J. Budziszewski’s Natural Theology of Sex: A Pathway to Biblical Understanding

    J. Budziszewski, in his insightful work On the Meaning of Sex, presents a compelling natural theological framework that grounds sexual ethics in the inherent design and purpose of human beings. This approach, by meticulously analyzing the given structure of human nature, offers a robust pathway that can successfully lead to a Biblical understanding of sexuality and gender. Budziszewski argues that meaning is not arbitrarily assigned but is discovered through the inherent design of creation, and it is this foundational concept that shapes his comprehensive view of sexual morality.

    A) The Foundational Idea: Inherent Design and Purpose

    The bedrock of Budziszewski’s philosophy, especially concerning the questions of sexuality and gender, is the conviction that meaning is intrinsic to reality, particularly to human nature itself. He firmly asserts, “Meaning isn’t arbitrary. Yes, we can associate sex in our minds with anything we choose—with pain, pleasure, tedium, amusement, alienation, reconciliation, fertility, sterility, misery, joy, life, death, or what have you. This is true of all things, not just sex. We can associate anything with anything” (7). However, he immediately clarifies that subjective association does not alter objective meaning. For Budziszewski, human nature is not an external master but “the deep structure of what we really are” (8). True freedom, then, is not the ability to transcend this nature, but rather the ability to align our wills with it, to allow “the meanings and purposes that lie fallow in sexuality [to] unfold” (8). He explains that the human will is not separate from nature but an integral part of it, asserting that the will’s nobility lies in its capacity to discern and direct itself according to the inherent wisdom embedded in our being.

    Budziszewski confronts common objections to this idea, particularly the notion that one cannot derive an “ought” from an “is.” He dismantles this dogma by using simple, yet powerful, examples. When discussing the lungs, he posits, “When we say that their purpose is to oxygenate the blood, are we just making that up? Of course not. The purpose of oxygenation isn’t in the eye of the beholder; it’s in the design of the lungs themselves” (22). This emphasis on “the design of the lungs” is crucial; it implies that purpose is empirically discoverable. Furthermore, he contends that to violate this inherent design, such as by “sniffing glue,” does not change the lung’s purpose but only “violates it” (22). Similarly, regarding eyes, he argues, “If the purpose of eyes is to see, then eyes that see well are good eyes, and eyes that see poorly are poor ones. Given their purpose, this is what it means for eyes to be good. Moreover, good is to be pursued; the appropriateness of pursuing it is what it means for anything to be good. Therefore, the appropriate thing to do with poor eyes is try to turn them into good ones” (22). This demonstrates that understanding a thing’s inherent purpose necessarily implies an “ought”—an imperative to act in accordance with that purpose. He further distinguishes “purpose” from mere “function,” stating that purpose signifies something “ordered or directed to an end,” whereas function merely “signifies the mode in which purpose is present in things rather than in minds” (23). This foundational idea underpins his entire argument: that human beings, as integrated wholes of “mind and flesh united,” must respect the inherent design of their bodies, including their sexuality (23). While he acknowledges that some might dismiss his work as “religious” due to references to “God,” he insists that divine grace, if real, is “inescapably relevant to human life” and can be understood even through natural reasoning (11).

    B) Application to Gender and Sexuality

    Applying this foundational idea, Budziszewski posits that human sexuality possesses “embedded principles and the inbuilt meaning of the human sexual design” (21). He laments that “errors about sex cause such terrible suffering, in our day more than most” (12), and attributes this suffering to the flouting of these inherent meanings. He identifies two fundamental “natural meanings” of sex that are “so tightly stitched that we can start with either one and follow the threads to the other” (24): procreation and union.

First, regarding procreation, Budziszewski asserts that it is “the bringing about and nurturing of new life, the formation of families in which children have moms and dads” (24). He outlines two conditions for establishing something’s purpose: it must actually bring about the effect, and the causal connection must explain its existence. Sexuality undeniably meets both: “the sexual powers do bring about procreation,” and “apart from the link between the sexual powers and new life, any explanation of why we have sexual powers at all would be woefully incomplete” (25). This procreative meaning, in turn, necessitates the concept of union. He argues, “For us, procreation requires an enduring partnership between two beings, the man and the woman, who are different, but in ways that enable them to complete and balance each other. Union, then, characterizes the distinctly human mode of procreation” (25). This enduring partnership between a man and a woman is essential not only for conception but also for the raising of children, as “the male is better suited to protection, the female to nurture” (26). Children also need models of both sexes and the relationship between them to thrive and eventually form their own families. He even cites sociologists Sara S. McLanahan and Gary Sandefur, who suggest that “If we were asked to design a system for making sure that children’s basic needs were met, we would probably come up with something quite similar to the two-parent ideal” (26).

    Conversely, Budziszewski demonstrates how starting with the unitive meaning also leads back to procreation. He states, “We join ourselves by doing what? By an act which is intrinsically open to the possibility of new life. In other words, whenever I give myself sexually, I am doing something that cannot help but mean that happy chance” (27). This implies that a true, total self-giving in union de facto means a bodily giving, which inherently carries the possibility of new life. He powerfully illustrates this with the concept of the body’s objective “speech”: “What you intend subjectively can’t change what your act means objectively…When the speech of the mouth contradicts the speech of the body, the body’s speech repeals the mouth’s. To crush your windpipe with my thumbs is to say to you, ‘Now die,’ even if I tell you with my mouth, ‘Be alive’” (27). Sexual union, therefore, objectively “speaks” of total, self-giving, life-affirming communion, regardless of subjective intent. By the end of this analysis, Budziszewski concludes that these are “the natural laws of sex” (33).

    C) Evaluation and Connection to Biblical Understanding

    Budziszewski’s position is remarkably helpful and coherent in discussing gender and sexuality, particularly as it provides a clear pathway to understanding these concepts from a Biblical perspective. His natural law approach, by grounding sexual ethics in discernible human design and purpose, offers a rational basis for moral norms that is not solely reliant on religious dogma, even as it ultimately aligns with it. He addresses the widespread confusion of our age, where “everything is topsy-turvy and confused,” by reminding us that “It is harder to write about what is obvious but unrecognized than about what is really obscure” (15). His method makes the “obvious” — the inherent meaning of sex — recognizable again.

The direct alignment between Budziszewski’s “natural laws of sex” and Biblical principles is striking. The procreative meaning he identifies, “the bringing about and nurturing of new life, the formation of families in which children have moms and dads,” finds a direct echo in the Genesis mandate, “Be fruitful and multiply” (Genesis 1:28). This divine command is not an arbitrary rule but an affirmation of the inherent design for flourishing that God embedded within creation, particularly in human sexual powers. The natural purpose of bringing forth new life and fostering it within the structure of a family led by a mother and a father is, for Budziszewski, a self-evident truth discoverable through observation, much like the purpose of lungs or eyes.

    Similarly, his unitive meaning of sex—the “mutual and total self-giving and accepting of two polar, complementary selves in their entirety, soul and body”—is perfectly mirrored in the Biblical concept of “one flesh” (Genesis 2:24; Matthew 19:5-6). This Biblical phrase signifies not merely physical intimacy but a profound, holistic union of two distinct yet complementary individuals (male and female) into a new relational entity. Budziszewski’s argument that sexual union is “intrinsically open to the possibility of new life” and that subjective intent cannot override the objective “speech” of the body powerfully reinforces the sanctity and seriousness of the one-flesh union as depicted in scripture. The Bible’s understanding of marriage as the exclusive context for sexual intimacy, and the procreative blessing associated with it, finds a rational foundation in Budziszewski’s natural law deductions. His framework thus serves as a potent apologetic, demonstrating that the Biblical understanding of sexuality is not a set of arbitrary prohibitions but rather a reflection of the deepest truths embedded in human nature by its Creator.

    In conclusion, J. Budziszewski’s approach to natural theology in On the Meaning of Sex provides an exceptionally valuable framework for understanding sexuality and gender. By firmly grounding his arguments in the inherent design and purpose of human nature, he navigates complex ethical terrain with clarity and precision. His articulation of sex’s natural meanings—procreation and union—is not only philosophically robust but also demonstrably converges with the ethical insights found in Biblical teachings. In a world often characterized by confusion and suffering regarding sexual identity and behavior, Budziszewski’s work offers a compelling and coherent pathway to rediscovering meaning, leading ultimately to a fuller appreciation of sexuality and gender as they are divinely designed and revealed.

    Works Cited

Budziszewski, J. On the Meaning of Sex. InterVarsity Press, 2012.

The New American Standard Bible, 1995.

  • The Pagan Can Be Saved?

    The Pagan Can Be Saved?

    Wesley Coleman

In Søren Kierkegaard’s Concluding Unscientific Postscript to Philosophical Fragments, Johannes Climacus dismantles objective and speculative construals of Christianity, arguing instead that authentic religious truth is fundamentally subjective. As exemplified in his assertion on page 201 regarding truth in prayer, Climacus posits that the manner of an individual’s infinite, passionate relation to the eternal—even in the face of objective uncertainty or perceived untruth—is paramount, superseding intellectual assent to dogma or historical fact and revealing the inherent limitations of any detached, disinterested approach to faith. This stance foregrounds the lived reality of faith as a personal, strenuous endeavor, fundamentally separate from and perhaps at odds with objective inquiry.

    Kierkegaard, through Climacus, opens the Postscript by challenging what he identifies as problematic approaches to understanding Christianity: the historical, the speculative, and the superficial religiousness prevalent in his time. From the very start, Kierkegaard has separated the objective issue of the truth of Christianity from the subjective issue of the subjective individual’s relation to the truth of Christianity (Kierkegaard 22). Climacus contends that the objective point of view, whether focusing on historical or philosophical truth, is inherently flawed when applied to Christianity. An objective inquiry is characterized as “disinterested,” seeking to establish truth through critical consideration of reports or the relation of doctrine to eternal truth. However, for an individual concerned with their eternal happiness, historical certainty, being merely an “approximation,” is profoundly insufficient. This is because “an approximation is too little to build his happiness on and is so unlike an eternal happiness that no result can ensue” (Kierkegaard 22). The scholarly pursuit, while commendable in its erudition, ultimately “distracts” from the issue of an individual’s faith (Kierkegaard 14) and “suppresses” the vital dialectical clarity required for true understanding (Kierkegaard 11).

The fundamental problem with objectivity, as Climacus elaborates, is its inherent detachment from the individual’s existence. The “objective subject” is too “modest” and “immodest” to include himself in the inquiry; he is interested but “not infinitely, personally, impassionedly interested in his relation to this truth concerning his own eternal happiness” (Kierkegaard 22). This detachment leads to a comical self-deception: “Precisely this is the basis of the scholar’s elevated calm and the parroter’s comical thoughtlessness” (Kierkegaard 22). Christianity, Climacus asserts, is spirit; spirit is inwardness; inwardness is subjectivity; subjectivity is essentially passion, and at its maximum an infinite, personally interested passion for one’s eternal happiness. Therefore, as soon as subjectivity is taken away, and passion from subjectivity, and infinite interest from passion, there is no decision whatsoever. The objective approach, by sacrificing this infinite, personal, impassioned interestedness, paradoxically makes one too objective to have eternal happiness. The speculative point of view fares no better, attempting to permeate Christianity with thought and make it eternal thought. Yet, if Christianity is truly subjectivity, a matter of inward deepening, then objective indifference cannot come to know anything whatsoever. Like is understood only by like; thus, the knower must be in the requisite state of infinite, passionate interest. Speculative thought, in its objectivity, is “totally indifferent to his and my and your eternal happiness” (Kierkegaard 55), making its “happiness” an illusion as it attempts to be “exclusively eternal within time” (Kierkegaard 56).

This critique of objective and speculative approaches, which Climacus gradually unfolds, builds to a climax on page 201 with the passage at hand. The chapter titled “Subjective Truth, Inwardness; Truth Is Subjectivity” in Part Two directly introduces the core concept that “truth becomes appropriation, inwardness, subjectivity, and the point is to immerse oneself, existing, in subjectivity” (Kierkegaard 192). Climacus establishes that for an existing person, “the question about truth persists” not as an abstract definition, but as something to “exist in” (Kierkegaard 191). He dismisses mediation and the abstract “subject-object” as reverting to abstraction (Kierkegaard 192), emphasizing that “an existing person cannot be in two places at the same time, cannot be subject-object” (Kierkegaard 199). The “I-I” is explicitly called a “mathematical point that does not exist at all” (Kierkegaard 197), making it clear, for Climacus, that it is an impossibility for an existing human being to transcend their individual, passionate existence and achieve this abstract oneness. For Climacus, “only ethical and ethical-religious knowing is essential knowing” (Kierkegaard 198), and such knowing is always essentially related to the knower’s own existence.

    The critical distinction, immediately preceding the paragraph in question, is articulated as: “When the question about truth is asked objectively, truth is reflected upon objectively as an object to which the knower relates himself…When the question about truth is asked subjectively, the individual’s relation is reflected upon subjectively. If only the how of this relation is in truth, the individual is in truth, even if he in this way were to relate himself to untruth” (Kierkegaard 199). This prioritizes the mode of relation over the object of relation in its abstracted form separate from engagement.

Then, the force of Climacus’s argument is finally catalyzed. He starts with an aggressive remark, “now, if the problem is to calculate where there is more truth…then there can be no doubt about the answer for anyone who is not totally botched by scholarship and science” (Kierkegaard 201). The harsh remark rings true; it is intuitive for all those not steeped in abstraction. Those incapable of grasping the truth are those who have been immersed in a harmful way of thinking, and Climacus’s words are meant to provoke that truth. The phrase “botched by scholarship and science” in particular stands in contrast to the “infinite, personal, impassioned interestedness” missing in the person pursuing the objective issue (Kierkegaard 27).

    Climacus then explicitly rules out any notion of a neutral, balanced approach: “(and, as stated, simultaneously to be on both sides equally is not granted to an existing person but is only a beatifying delusion for a deluded I-I)” (Kierkegaard 201). This re-emphasizes that an existing human being cannot inhabit the abstract “subject-object” or “I-I,” which is a phantom of pure thought (Kierkegaard 192). To attempt such a mediation between objective and subjective approaches is a “delusion,” a fantastical escape from the concrete reality of existing. An existing person is always in a process of becoming (Kierkegaard 192), and this inherent motion precludes the static, all-encompassing view of the “I-I” (Kierkegaard 199).

    The core of the paragraph is the deep dichotomy presented: “whether on the side of the person who only objectively seeks the true God and the approximating truth of the God-idea or on the side of the person who is infinitely concerned that he in truth relate himself to God with the infinite passion of need” (Kierkegaard 201). The dichotomy is on one hand, “the true God” and “approximating truth of the God-idea” and on the other, “infinite passion of need.” The objective seeker remains stuck in approximate knowledge, which, as established earlier, is insufficient for eternal happiness. In contrast, the “infinite passion of need” signifies the highest subjectivity, where the individual’s “eternal happiness” is at stake. This passion brings true existential importance to the individual which is impossible through speculation.

The paragraph then presents a provocative thought experiment: “If someone who lives in the midst of Christianity enters, with knowledge of the true idea of God, the house of God, the house of the true God, and prays, but prays in untruth, and if someone lives in an idolatrous land but prays with all the passion of infinity, although his eyes are resting upon the image of an idol—where, then, is there more truth?” (Kierkegaard 201). This scenario is deeply unsettling for many who view Christianity as something true that one believes about God. The analogy turns that presumption on its head, drawing a distinction between the “what” and the “how” of faith (Kierkegaard 199). The person who is a Christian by birth or culture or even intellectually “knows the true idea of God” and prays in the “house of the true God” (Kierkegaard 201) represents the objective approach that assumes faith is an afterthought and something that can be taken for granted. Such an individual may possess all the outward forms and correct doctrines, but their prayer is “in untruth” if it lacks the “infinite passion of inwardness” (Kierkegaard 201). This coincides with Climacus’s earlier assertion that objective Christianity is pagan (Kierkegaard 43), and to know a creed by rote is paganism, because Christianity is inwardness. Their knowledge, being disinterested, is merely a vanishing, unrecognizable atom of objective understanding, not transformative truth.

    Conversely, the individual in an “idolatrous land” who prays “with all the passion of infinity” to an idol, despite the objective untruth of the object, possesses “more truth” (Kierkegaard 201). The passion itself, the subjective “how” of their relation, is the determining factor. This is because the passion of the infinite is the very truth. Their worship, even of an objectively false god, carries the weight of authentic, boundless engagement.

    The conclusion of the paragraph drives the point home: “The one prays in truth to God although he is worshiping an idol; the other prays in untruth to the true God and is therefore in truth worshiping an idol” (Kierkegaard 201). This is not a relativistic dismissal of God’s objective existence, but a radical redefinition of what constitutes truth in the context of an individual’s religious life. The person who prays passionately to an idol is, in their inwardness, genuinely seeking the divine, and this “infinite passion of need” (Kierkegaard 201) creates a true “God-relation” (Kierkegaard 199). Their relation, despite the objective error, is in truth. This is, perhaps, a shocking revelation to the one who calls the heretic ‘unsaved’. Conversely, the person who prays to the true God without this infinite passion effectively turns the true God into an “idol”—an object of detached, intellectual assent rather than a living, transforming presence. This intellectual understanding without passionate inwardness is merely an illusion. It reduces the divine to an object for intellectual scrutiny, precisely what objective thought does to Christianity (Kierkegaard 52).

    Other possible interpretations of this passage, primarily objective or speculative, fail to grasp its radical thrust. An objective interpretation would likely focus on the factual untruth of idol worship, concluding that the idolater is in untruth regardless of their passion. This perspective, however, completely misses Climacus’s central argument that objective knowledge is “indifferent” to the knower’s existence and thus cannot engage with the truth of the infinite (Kierkegaard 193). For an objective approach, the truth is merely “an object to which the knower relates himself” (Kierkegaard 199), failing to recognize that “the individual’s relation is reflected upon subjectively” and the “how” is truth (Kierkegaard 199). This kind of detached, “disinterested” knowledge simply “distracts” from the issue of faith (Kierkegaard 28).

    A speculative interpretation might attempt to mediate between the two positions, arguing that the true understanding lies in a higher synthesis where both the object and the subjective relation are reconciled. However, Climacus explicitly rejects such mediation for an existing person, stating that to be in mediation is to be finished; to exist is to become. Speculative thought, in its quest for a “system” (Kierkegaard 14), “promises everything and keeps nothing at all” for the existing individual. It assumes a “presuppositionless” beginning and ultimately “dissolves into a make-believe” of understanding faith (Kierkegaard 14). By attempting to “explain and annul” the paradox, speculative thought implicitly “corrects” Christianity instead of explaining it. The absolute paradox, which is the eternal truth coming into existence in time, cannot be understood but only believed “against the understanding” (Kierkegaard 217). Any attempt to rationally encompass or explain it is “volatilization” and a return to paganism (Kierkegaard 217). The speculative thinker, in trying to become “objective” and “disappear from himself” (Kierkegaard 56), cannot grasp the existential truth of faith, which is grounded in passion and the “utmost exertion” of the existing self (Kierkegaard 55).

    Furthermore, the interpretation that reduces Christianity to a set of doctrines or a historical phenomenon, implicitly adopted by the “Christian in the midst of Christianity” who prays “in untruth” (Kierkegaard 201), is also rejected. Christianity is not a doctrine but a relational act. The relation to a doctrine is merely intellectual, whereas the relation to Christianity is one of faith, an infinite interestedness. To be a Christian by name only is a serious danger due to the fact that it removes the necessary “infinite passion” (Kierkegaard 16). Such individuals, by “praying in untruth” (Kierkegaard 201), effectively transform the true God into an “idol” (Kierkegaard 201), stripped of the demanding, transformative power that calls for infinite inwardness.

    In conclusion, the paragraph on page 201 profoundly encapsulates Climacus’s core thesis: Christianity’s truth is existentially actualized not through objective knowledge or speculative comprehension, but through the subjective individual’s absolute, infinite passion. This passion, born of an “infinite need” and held fast against “objective uncertainty” (Kierkegaard 203), is the very essence of faith, a “contradiction between the infinite passion of inwardness and the objective uncertainty” (Kierkegaard 204). The example of the passionate idolater versus the dispassionate Christian reveals that the intensity and truthfulness of the subjective relation far outweigh the objective accuracy of the object of worship when it comes to genuine religiousness. This radical emphasis on the “how” of faith over the “what” forces the reader to confront the demanding, terrifying, and deeply personal nature of becoming and being a Christian, a path that rejects the easy and fragmentary reassurances of objective certainty and speculative systems in favor of a lived, passionate existence with a holistic commitment. The radical conclusion is that one can be in objective error and yet stand in real relationship with God: the pagan can be saved, not because their idol is the true God, but because they have true faith.

    Kierkegaard, Søren. Concluding Unscientific Postscript to Philosophical Fragments. Edited and translated by Howard V. Hong and Edna H. Hong, Princeton UP, 1992.

  • The Nature of Society: Where We Stand as Individuals

    The Nature of Society: Where We Stand as Individuals

    From my perspective, society isn’t some grand, top-down invention or a purely artificial construct. Instead, it’s a natural outgrowth of human interaction, an organic creation. This organic origin gives society a fascinating, dualistic nature: it’s both a source of conflict and a fertile ground for cooperation, a necessary evil, and a crucial tool for individual flourishing. I see these seemingly opposing ideas not as separate or contradictory, but as deeply intertwined.

    The inherent conflict within society comes from the undeniable reality of human imperfection. As fallen creatures, individuals will always have competing interests, differing desires, and a natural lean toward self-interest and corruption. This doesn’t mean we’re in a constant state of overt warfare, but rather a perpetual tension over resources, values, and the direction we take as a collective. Yet, our natural inclination to interact also fosters cooperation. Things like specialization, security, the pursuit of knowledge, and companionship make a collective invaluable. Society, then, emerges from this very tension—the delicate balance between individual will and collective order.


    Our Place as Individuals in the Social Fabric

    An individual’s relationship to society is equally nuanced. In my view, the paramount command for each of us is to love our neighbor and orient our lives toward God. This core Christian ethical responsibility dictates an outward-looking concern for others, yet it critically anchors responsibility within our own sphere of influence. While the collective good is undeniably important and should be prioritized when we can genuinely effect change, our ultimate responsibility isn’t to the totality of society—what Dostoevsky called ‘general love of humanity’—but to what we can directly control: the self.

    This means cultivating personal virtue, making ethical choices in daily interactions, and contributing positively within our own communities. Society, in turn, has a duty to its members, but this duty is reciprocal. It flows from the recognition that individuals have responsibilities toward each other. It’s not a top-down benevolence, but a framework of mutual obligation.


    Understanding Freedom and Authority

    Freedom, in this context, isn’t absolute license. All freedom is either freedom from or freedom to. We should possess freedom from things that cause harm—whether it’s physical violence, coercive manipulation, or the unjust suppression of conscience. Equally, we should have the freedom to choose things that benefit us, to pursue our vocations, and to act on good impulses. Crucially, to exercise these freedoms, we must also be free to express our perceptions of what is beneficial and what is harmful, and to act on the former while restraining the latter.

    Broader, or higher, societal authority should be clearly codified into law, discriminating against no one group. These laws should ideally be general rules of conduct, equally applicable to all, providing a predictable framework for individual action rather than dictating specific outcomes.

    This idea comes from a fundamental principle of governance, which I derive from thinkers like Hayek and Mill: broader authority—the state or collective institutions—should err on the side of fewer restrictions and regulations. Its role is to establish and enforce the rules of the game, not to direct the play itself. Conversely, narrower authority, which narrows ultimately to the self, should err on the side of greater restraint. This means exercising personal moral discipline and self-governance.

    This plays out with a clear distinction: the king declares that murder is forbidden, establishing a universal legal boundary, while the individual forbids hate in his own heart, engaging in the continuous, internal struggle for virtue. The former creates external order; the latter cultivates internal righteousness. The moment this moral hierarchy is dismantled is likely the same moment society begins to decline.


    The Unending Struggle

    Human beings are fallen creatures, and none of this will ever play out as a utopian vision. We’re not so malleable, in a Marxist sense, that our nature can be entirely shaped by policy or environmental conditions; there are inherent tendencies and proclivities that resist perfect social engineering. Nor are humans so inherently good that they don’t tend toward corruption when power is consolidated or accountability is removed. While humans are capable of immense wonders, they are equally capable of great atrocities. It’s not wrong to call humanity bad in its fallen state, but to call us irredeemable would be antithetical to the Christian ethos that informs my worldview.

    The telos of man, our ultimate purpose, is to obey God’s commands. Ideally, institutions should facilitate that process, creating an environment conducive to moral flourishing. However, due to human imperfection and the inherent limitations of collective structures, institutions are, perhaps, not capable of reaching that ideal state in their earthly manifestation.

    In many ways, I identify strongly with Friedrich Hayek’s arguments in The Road to Serfdom. His critique of collectivist policies and central planning resonates with my understanding of human nature and the necessary boundaries of societal authority. Hayek meticulously demonstrates how attempts to centrally plan society toward specific, desirable ends, even with the best intentions, inevitably lead to a loss of individual liberty and an escalation of coercive power and totalitarianism. I maintain a tentative rule-of-law position while I wait for the Lawmaker.

    Further Reading:

    • Dostoevsky, Fyodor. The Brothers Karamazov
    • Hayek, F. A. The Road to Serfdom
    • Marx, Karl, and Engels, Friedrich. The Communist Manifesto
    • Mill, John Stuart. On Liberty
  • The Incoherence of Naturalism

    The Incoherence of Naturalism

    Introduction

    Naturalism—the philosophical position that reality consists entirely of natural entities governed by natural laws—presents itself as the most rational and empirically grounded worldview. Yet despite its scientific veneer, naturalism suffers from foundational incoherence that undermines its viability as a comprehensive philosophy.

    This critique demonstrates that naturalism is self-defeating, arbitrarily restrictive, explanatorily inadequate, and internally inconsistent. Each of these failings stems not from temporary limitations in scientific knowledge but from structural contradictions within naturalism itself. Together, they render naturalism philosophically untenable and point toward the necessity of a more pluralistic metaphysical framework.

    Premise 1: Self-Defeating Foundations

    Naturalism’s first fatal flaw lies in its inability to justify its own foundations without circularity or special pleading.

    Scientific inquiry rests on several non-empirical assumptions that cannot be empirically verified: the reliability of human reason, the uniformity of nature, the correspondence between our perceptions and external reality, and principles like logical consistency and parsimony. These assumptions cannot be proven through scientific methods—they are preconditions for scientific inquiry itself.

    This creates an insurmountable problem for naturalism. If reality consists entirely of natural entities governed by natural laws, then human cognition is merely the product of evolutionary processes that selected for survival value, not truth-tracking capacity. As philosopher Alvin Plantinga argues, if our cognitive faculties evolved primarily for reproductive fitness rather than truth detection, we have no reason to trust them for accurately grasping metaphysical truths like naturalism itself.

    The naturalist might counter that evolutionary adaptiveness correlates with truth-tracking, particularly regarding immediate environmental threats. But this defense fails to bridge the gap between adaptive perceptual reliability and justified abstract metaphysical beliefs. There is no evolutionary advantage to having accurate beliefs about quantum mechanics, consciousness, or cosmic origins. Natural selection has no mechanism to select for metaphysical accuracy.

    This creates what philosopher Thomas Nagel calls an “intolerable conflict” in naturalism—it relies on rational faculties that, by its own account, evolved for survival rather than metaphysical accuracy. The naturalist faces what Barry Stroud terms “irrecoverable circularity”: they must presuppose the reliability of faculties whose reliability they then try to explain through evolutionary processes.

    Even sophisticated attempts to escape this circularity through epistemic externalism merely shift the problem. Reliabilism claims beliefs formed through reliable processes are justified regardless of whether we can prove their reliability. But this begs the question: how do we establish which processes are reliable without presupposing their reliability? At some point, non-empirical axioms must be accepted on non-natural grounds.

    If naturalists retreat to pragmatism, accepting these axioms as “useful fictions” rather than truths, they have conceded that naturalism cannot justify its foundations within its own framework. This pragmatism is itself a non-empirical philosophical commitment that naturalism can neither justify nor dispense with.

    Premise 2: Arbitrary Restriction of Inquiry

    Naturalism’s second critical flaw lies in its arbitrary restriction of legitimate inquiry to natural causes alone.

    Philosophical naturalism makes an unwarranted leap from methodological naturalism (the practical scientific approach of seeking natural causes) to a metaphysical claim that only natural causes exist. This represents a category error—moving from a useful methodological heuristic to an ontological assertion without sufficient justification.

    By defining reality exclusively in terms of what natural science can study, naturalism creates a self-fulfilling prophecy: it finds only natural causes because it defines all discoverable causes as natural by definition. This circular approach prejudices investigation rather than allowing evidence to determine the boundaries of reality.

    The most powerful demonstration of this limitation is consciousness. Despite tremendous advances in neuroscience, the qualitative character of subjective experience—what philosopher Thomas Nagel calls the “what it is like” aspect of consciousness—resists reduction to physical processes. Neuroscience can correlate neural activity with reported experiences but cannot explain why these physical processes are accompanied by subjective experience at all.

    This limitation isn’t temporary but structural—scientific methods are designed to study third-person observable phenomena, not first-person subjectivity. The scientific method, by its very nature, abstracts away subjective qualities to focus on quantifiable properties. This creates what philosopher David Chalmers calls the “hard problem” of consciousness—explaining how and why physical processes give rise to subjective experience.

    Naturalists often respond by incorporating consciousness as a “fundamental” feature of an expanded natural framework—what Chalmers calls “naturalistic dualism.” But this semantic maneuver doesn’t resolve the ontological problem. If consciousness is fundamental and irreducible to physical processes, then reality includes non-physical properties—precisely what traditional naturalism denies. This exhibits what philosopher William Hasker calls “naturalism of the gaps”—expanding the definition of “natural” to encompass whatever resists reduction.

    Unlike historical examples such as electromagnetism, or the life processes once attributed to a vital force, which were unexplained physical phenomena eventually incorporated into expanded physical frameworks, consciousness presents a categorically different challenge—explaining how subjective experience arises from objective processes. This isn’t merely an unexplained mechanism but a conceptual chasm between fundamentally different categories of reality.

    Premise 3: Explanatory Gaps

    Naturalism’s third major flaw lies in its persistent failure to explain fundamental aspects of human experience, despite centuries of scientific progress.

    Beyond consciousness, naturalism struggles to account for several phenomena central to human existence:

    Intentionality: The “aboutness” of mental states—the fact that thoughts, beliefs, and desires are about something beyond themselves—resists physical reduction. Physical states aren’t intrinsically “about” anything; they simply are. Yet our mental states exhibit this irreducible directedness toward objects, concepts, and possibilities. Philosopher Franz Brentano identified intentionality as the defining characteristic of mental phenomena, creating an explanatory gap that naturalism has failed to bridge.

    Rationality: Logical relationships between propositions aren’t physical connections but normative ones—they describe how we ought to reason, not merely how matter behaves. The laws of logic and mathematics exhibit a necessity that natural laws lack. Natural laws describe contingent regularities that could theoretically be otherwise; logical laws express necessary truths that couldn’t possibly be different. This modal difference creates another category distinction that naturalism struggles to accommodate.

    Morality: Moral imperatives involve inherent “ought” claims that cannot be derived from purely descriptive “is” statements. As philosopher G.E. Moore identified, any attempt to define moral properties in natural terms commits the “naturalistic fallacy.” Evolutionary accounts may explain the origins of moral psychology but cannot justify moral claims as true or authoritative. If moral judgments are merely evolutionary adaptations, their normative force is undermined, creating what philosopher Sharon Street calls the “Darwinian Dilemma.”

    Naturalists often respond to these gaps through eliminativism or emergentism. Eliminativism denies the reality of these phenomena, claiming they are illusions or folk-psychological confusions. But this approach is self-defeating—an illusion of consciousness must be experienced by someone, making consciousness inescapable. As philosopher John Searle notes, “Where consciousness is concerned, the appearance is the reality.”

    Emergentism fares no better. To claim consciousness “emerges” from physical processes without explaining the mechanism of emergence merely restates the mystery. Unlike other emergent properties (like liquidity emerging from H₂O molecules), consciousness involves a transition from objective processes to subjective experience—a categorical leap, not a continuous spectrum. The naturalist must explain how an arrangement of non-conscious particles yields consciousness, a problem to which, philosopher Colin McGinn argues, the human mind may be “cognitively closed.”

    These explanatory gaps aren’t temporary limitations in scientific knowledge but principled barriers arising from naturalism’s restricted ontology. After centuries of scientific progress, these gaps remain as profound as ever, suggesting a fundamental inadequacy in naturalism’s conceptual resources.

    Premise 4: Inconsistent Verification

    Naturalism’s fourth fatal flaw lies in its criterion for knowledge, which cannot justify itself without inconsistency.

    The naturalist privileges empirical verification—the idea that meaningful claims must be empirically testable. Yet this verification principle itself cannot be empirically verified. It is a philosophical position, not a scientific discovery. This creates an internal contradiction: if we accept only what can be demonstrated through scientific methods, we must reject the very principle that demands such verification.

    Even if naturalists reject strict verificationism, they still privilege empirical evidence above all else. Yet this privileging itself cannot be empirically justified. It’s a meta-empirical value judgment about what counts as legitimate evidence—precisely the kind of non-empirical philosophical commitment that naturalism struggles to account for.

    Attempts to resolve this inconsistency through naturalized epistemology (following Quine) don’t solve the problem—they institutionalize it. Treating epistemology as a branch of psychology assumes the reliability of the psychological methods used to study epistemology. This raises the problem philosopher Laurence BonJour calls “meta-justification”: how do we justify our justificatory framework? Naturalized epistemology ultimately relies on pragmatic success, but this pragmatism itself requires non-empirical criteria for what constitutes “success.”

    Even if we accept Quine’s web of belief, some strands in the web must be anchored independently of empirical verification. These include logical principles, mathematical truths, and the assumption that reality is comprehensible. These principles aren’t empirically derived but are preconditions for empirical inquiry. Their necessity reveals naturalism’s dependence on non-natural foundations.

    Naturalism thus faces an inescapable dilemma: either it consistently applies its verification standards and undermines its own foundations, or it makes special exceptions for its core principles and thereby acknowledges limits to its explanatory scope.

    The Inescapable Dilemma of Naturalism

    These four premises reveal that naturalism faces an inescapable dilemma:

    1. Strict naturalism maintains a coherent ontology (only physical entities exist) but fails to account for consciousness, intentionality, rationality, and its own foundations.
    2. Expanded naturalism accommodates these phenomena but sacrifices coherence by stretching “natural” to include fundamentally non-physical properties.

    This isn’t merely a limitation of current knowledge but a structural impossibility within naturalism’s framework. The problem isn’t that naturalism hasn’t yet explained consciousness; it’s that consciousness is categorically different from physical processes, requiring explanatory principles that transcend physical causation.

    A “richer naturalism” that embraces consciousness as fundamental, accepts non-empirical axioms pragmatically, and incorporates abstract objects has abandoned naturalism’s core thesis that reality consists entirely of natural entities subject to natural laws. This isn’t evolution of inquiry but conceptual surrender.

    Beyond Naturalism: The Case for Metaphysical Pluralism

    The most coherent alternative to naturalism is metaphysical pluralism—recognizing that reality includes physical processes, conscious experience, abstract entities, and normative truths, without reducing any to the others.

    This pluralistic approach acknowledges that different domains of reality require appropriate methods of investigation:

    1. Physical phenomena are best studied through empirical scientific methods
    2. Conscious experience requires phenomenological approaches that honor subjectivity
    3. Logical and mathematical truths demand rational analysis independent of empirical verification
    4. Normative questions involve philosophical reflection on values, not merely empirical facts

    Unlike naturalism, pluralism doesn’t face self-defeat (it can ground rational faculties non-circularly), doesn’t arbitrarily restrict inquiry (it allows appropriate methods for different domains), and doesn’t face explanatory gaps (it acknowledges irreducible categories without eliminating them).

    Naturalists often appeal to Ockham’s Razor (parsimony) and the practical success of science (pragmatism) as reasons to prefer naturalism over more metaphysically rich views like pluralism. However, as this critique has argued both implicitly and explicitly, these objections are problematic when leveled by naturalists themselves, given the internal difficulties naturalism faces.

    1. Problems with the Parsimony Critique:

    • False Parsimony: Naturalism’s claim to parsimony often amounts to ontological stinginess achieved by explanatory inadequacy. It claims to be simpler by positing only one fundamental kind of “stuff” (natural/physical). However, as Premises 2 and 3 detail, this simplicity is bought at the cost of being unable to adequately account for or integrate crucial aspects of reality like consciousness, intentionality, rationality, and normativity. A theory that is simple but leaves vast swathes of reality unexplained is not genuinely more parsimonious than a theory that posits more fundamental categories but can actually explain or accommodate all the relevant phenomena. True parsimony should be measured not just by the number of types of entities posited, but by the overall complexity of the explanatory framework required to account for the data. Pluralism, by assigning different phenomena to different appropriate categories, might require a more diverse ontology but arguably a less strained and more comprehensive explanatory structure than naturalism, which must resort to eliminativism, mysterious emergence, or redefining terms to handle outliers.
    • Redefining “Natural” Undermines Parsimony: As noted above, naturalists trying to accommodate phenomena like consciousness might resort to calling it a “fundamental feature” within an “expanded naturalism” or “naturalistic dualism.” This is an attempt to absorb irreducible phenomena by broadening the definition of “natural.” But this move itself adds fundamental categories or properties to the naturalist ontology. If “natural” now includes irreducible subjective experience or fundamental abstract objects, the initial claim to radical simplicity (“only physical stuff”) is surrendered. This “naturalism of the gaps” demonstrates that naturalists, when pressed, do feel the need to add fundamental categories, thereby undermining their own parsimony argument against pluralism.
    • Parsimony Itself is a Non-Empirical Principle: Ockham’s Razor is a meta-scientific or philosophical principle guiding theory choice. It’s not something discovered through empirical science. As Premise 4 argues, naturalism struggles to justify such non-empirical principles within its own framework. If the naturalist insists that all legitimate knowledge must be empirically verifiable or grounded, they face a difficulty in appealing to a principle like parsimony, which is a criterion of theoretical virtue, not an empirical fact. Using parsimony to critique pluralism requires the naturalist to step outside their own purported empirical-only standard, or at least rely on a principle they cannot ground naturally.

    2. Problems with the Pragmatism Critique:

    • Conflation of Methodological and Metaphysical Pragmatism: Naturalists often point to the undeniable success of science (which operates using methodological naturalism, seeking natural explanations within its domain) as evidence for metaphysical naturalism (the philosophical claim that only natural things exist). As Premise 2 argues, this is a category error. Methodological naturalism is pragmatic for the specific goal of studying the physical world empirically. Metaphysical naturalism is a comprehensive worldview claim. The pragmatism of the former doesn’t automatically transfer to the latter. Pluralism fully embraces methodological naturalism for understanding the physical realm but recognizes that other realms (subjective experience, logic, morality) require different, though equally valid, approaches.
    • Pragmatism for What Purpose? If pragmatism means “what works as a comprehensive worldview,” then naturalism is arguably not pragmatic because it fails to provide a coherent or satisfactory account of fundamental aspects of human reality (consciousness, meaning, values, reason’s validity), as detailed in Premise 3. It might be pragmatic for building bridges or predicting planetary motion, but it’s arguably deeply unpragmatic for understanding what it means to be a conscious, rational, moral agent in a world with objective truths. Pluralism, by acknowledging different domains and methods, is arguably more pragmatically successful as a philosophical framework because it provides conceptual resources to engage meaningfully with the full spectrum of human experience and inquiry, not just the physically quantifiable parts.
    • Naturalism May Rely on Pragmatism for its Own Foundations: As Premises 1 and 4 suggest, naturalists, when pushed on how they justify the reliability of reason or the empirical method itself, might retreat to a pragmatic defense (“these methods just work”). If naturalism must ultimately appeal to pragmatism to ground its own core principles, it’s in a weak position to then turn around and critique pluralism solely on pragmatic grounds, especially when pluralism can argue it is more pragmatically successful in making sense of all of reality. This creates a kind of “pragmatism of the gaps” where pragmatism is invoked precisely where naturalism’s internal justification fails.

    In summary, the naturalist critiques of pluralism based on parsimony and pragmatism often miss the mark. Naturalism’s parsimony is frequently achieved by ignoring significant data or by subtly expanding its ontology, undermining the claim to unique simplicity. Its appeal to pragmatism often confuses the success of scientific method (which pluralism utilizes) with the philosophical adequacy of metaphysical naturalism as a total worldview, and ignores naturalism’s own potential reliance on pragmatic grounds it cannot fully justify. Pluralism, while positing a richer ontology, can argue it offers a more genuinely explanatory parsimony and a more comprehensive pragmatism by acknowledging the irreducible complexity of reality.

    Metaphysical pluralism doesn’t entail supernaturalism or theism by necessity. One can reject both naturalism and supernaturalism by acknowledging that reality may include non-physical aspects (consciousness, mathematical truths, values) without positing supernatural entities. Philosophers like Thomas Nagel, John Searle, and David Chalmers have developed non-materialist frameworks that don’t entail theism.

    Conclusion

    Naturalism fails as a comprehensive worldview. Its success in explaining physical phenomena doesn’t justify its extension to all aspects of reality. Its persistent explanatory gaps in consciousness, rationality, and value—coupled with its inability to justify its own foundations—reveal its fundamental inadequacy.

    A truly rational approach follows evidence where it leads, even when it points beyond the boundaries of naturalistic explanation. This isn’t an abandonment of rationality but its fulfillment—acknowledging that different aspects of reality may require different modes of understanding.

    Metaphysical pluralism offers a more coherent framework that honors the multidimensional character of reality. It maintains the empirical rigor of science within its proper domain while recognizing that human experience encompasses dimensions that transcend physical reduction. In doing so, it avoids both the reductionism of strict naturalism and the supernaturalism it rightly criticizes, providing a middle path that better accounts for the full spectrum of human knowledge and experience.

  • An Argument for Agent Causation in the Origin of DNA’s Information

    An Argument for Agent Causation in the Origin of DNA’s Information

    NOTE: This is a design argument inspired by Stephen Meyer’s design argument from DNA. Importantly, specified complexity is replaced with semiotic code (which I find more precise), and intelligent design is replaced with agent causation (which I find preferable).

    This argument posits that the very nature of the information encoded in DNA, specifically its structure as a semiotic code, necessitates an intelligent cause in its origin. The argument proceeds by establishing two key premises: first, that semiotic codes inherently require intelligent (agent) causation for their creation, and second, that DNA functions as a semiotic code.

    Premise 1: The Creation of a Semiotic Code Requires Agent Causation (Intelligence)

    A semiotic code is a system designed for conveying meaning through the use of signs. At its core, a semiotic code establishes a relationship between a signifier (the form the sign takes, e.g., a word, a symbol, a sequence) and a signified (the concept or meaning represented). Crucially, in a semiotic code, this relationship is arbitrary or conventional, not based on inherent physical or chemical causation between the signifier and the signified. This requires an interpretive framework – a set of rules or a system – that is independent of the physical properties of the signifier itself, providing the means to encode and decode the meaning. The meaning resides not in the physical signal, but in its interpretation according to the established code.

    Consider examples like human language, musical notation, or traffic signals. The sound “stop” or the sequence of letters S-T-O-P has no inherent physical property that forces a vehicle to cease motion. A red light does not chemically or physically cause a car to stop; it is a conventionally assigned symbol that, within a shared interpretive framework (traffic laws and driver understanding), signifies a command to stop. This is distinct from a natural sign, such as smoke indicating fire. In this case, the relationship between smoke and fire is one of direct, necessary physical causation (combustion produces smoke). While an observer can interpret smoke as a sign of fire, the connection itself is a product of natural laws, existing independently of any imposed code or interpretive framework.

    The capacity to create and utilize a system where arbitrary symbols reliably and purposefully convey specific meanings requires more than just physical processes. It requires the ability to:

    • Conceive of a goal: to transfer specific information or instruct an action.

    • Establish arbitrary conventions: to assign meaning to a form (signifier) where no inherent physical link exists to the meaning (signified).

    • Design an interpretive framework: to build or establish a system of rules or machinery that can reliably encode and decode these arbitrary relationships.

    • Implement this system for goal-directed action: to use the code and framework to achieve the initial goal of information transfer and subsequent action based on that information.

    This capacity to establish arbitrary, rule-governed relationships for the purpose of communication and control is what we define as intelligence in this context. The creation of a semiotic code is an act of imposing abstract order and meaning onto physical elements according to a plan or intention. Such an act requires agent causation – causation originating from an entity capable of intentionality, symbolic representation, and the design of systems that operate based on abstract rules, rather than solely from the necessary interactions of physical forces (event causation).

    Purely natural, undirected physical processes can produce complex patterns and structures driven by energy gradients, chemical affinities, or physical laws (like crystal formation, which is a direct physical consequence of electrochemical forces and molecular structure, lacking arbitrary convention, an independent interpretive framework, or symbolic representation). However, they lack the capacity to establish arbitrary conventions where the link between form and meaning is not physically determined, nor can they spontaneously generate an interpretive framework that operates based on such non-physical rules for goal-directed purposes. Therefore, the existence of a semiotic code, characterized by arbitrary signifier-signified links and an independent interpretive framework for goal-directed information transfer, provides compelling evidence for the involvement of intelligence in its origin.

    Premise 2: DNA Functions as a Semiotic Code

    The genetic code within DNA exhibits the key characteristics of a semiotic code as defined above. Sequences of nucleotides (specifically, codons on mRNA) act as signifiers. The signifieds are specific amino acids, which are the building blocks of proteins.

    Crucially, the relationship between a codon sequence and the amino acid it specifies is not one of direct chemical causation. A codon (e.g., AUG) does not chemically synthesize or form the amino acid methionine through a direct physical reaction dictated by the codon’s molecular structure alone. Amino acid synthesis occurs through entirely separate biochemical pathways involving dedicated enzymes.

    Instead, the codon serves as a symbolic signal that is interpreted by the complex cellular machinery of protein synthesis – the ribosomes, transfer RNAs (tRNAs), and aminoacyl-tRNA synthetases. This machinery constitutes the interpretive framework.

    Here’s how it functions as a semiotic framework:

    • Arbitrary/Conventional Relationship: The specific assignment of a codon triplet to a particular amino acid is largely a matter of convention. While there might be some historical or biochemical reasons that biased the code’s evolution, the evidence from synthetic biology, where scientists have successfully engineered bacteria with different codon-amino acid assignments, demonstrates that the relationship is not one of necessary physical linkage but of an established (and in this case, artificially modified) rule or convention. Different codon assignments could work, but the system functions because the cellular machinery reliably follows the established rules of the genetic code.
    • Independent Interpretive Framework: The translation machinery (ribosome, tRNAs, synthetases) is a complex system that reads the mRNA sequence (signifier) and brings the correct amino acid (signified) to the growing protein chain, according to the rules encoded in the structure and function of the tRNAs and synthetases. The meaning (“add this amino acid now”) is not inherent in the chemical properties of the codon itself but resides in how the interpretive machinery is designed to react to that codon. This machinery operates independently of direct physical causation by the codon itself to create the amino acid; it interprets the codon as an instruction within the system’s logic.
    • Symbolic Representation: The codon stands for an amino acid; it is a symbol representing a unit of meaning within the context of protein assembly. The physical form (nucleotide sequence) is distinct from the meaning it conveys (which amino acid to add). This is analogous to the word “cat” representing a feline creature – the sound or letters don’t physically embody the cat but symbolize the concept.
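    The lookup-table character of the mapping described above can be sketched in a few lines of Python. The table here is a small illustrative subset of the standard genetic code, and the `recoded` reassignment is a hypothetical stand-in for the synthetic-biology recoding experiments mentioned; nothing in this sketch models real translation chemistry.

    ```python
    # Illustrative sketch: the codon -> amino acid assignment behaves like a
    # lookup table (a convention), not a chemical derivation. Only a small
    # subset of the standard genetic code is shown.
    STANDARD_CODE = {
        "AUG": "Met",   # start codon
        "UUU": "Phe",
        "GGC": "Gly",
        "UGG": "Trp",
        "UAA": "STOP",
    }

    def translate(mrna, code):
        """Read codons three bases at a time and map each through the table."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = code[mrna[i:i + 3]]
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    # A hypothetical reassignment: the same signifier yields a different
    # signified under a modified code, because the mapping is conventional.
    recoded = dict(STANDARD_CODE, UUU="Trp")

    print(translate("AUGUUUGGCUAA", STANDARD_CODE))  # ['Met', 'Phe', 'Gly']
    print(translate("AUGUUUGGCUAA", recoded))        # ['Met', 'Trp', 'Gly']
    ```

    The point of the sketch is simply that swapping a table entry changes the output without changing any chemistry of the signifier, mirroring the arbitrariness claim above.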

    Therefore, DNA, specifically the genetic code and the translation system that interprets it, functions as a sophisticated semiotic code. It involves arbitrary relationships between signifiers (codons) and signifieds (amino acids), mediated by an independent interpretive framework (translation machinery) for the purpose of constructing functional proteins (goal-directed information transfer).

    Conclusion: Therefore, DNA Requires Agent Causation in its Origin

    Based on the premises established:

    1. The creation of a semiotic code, characterized by arbitrary conventions, an independent interpretive framework, and symbolic representation for goal-directed information transfer, requires the specific capacities associated with intelligence and agent causation (intentionality, abstraction, rule-creation, system design).
    2. DNA, through the genetic code and its translation machinery, functions as a semiotic code exhibiting these very characteristics.

    It logically follows that the origin of DNA’s semiotic structure requires agent causation. The arbitrary nature of the code assignments and the existence of a complex system specifically designed to read and act upon these arbitrary rules, independent of direct physical necessity between codon and amino acid, are hallmarks of intelligent design, not the expected outcomes of undirected physical or chemical processes.

    Addressing Potential Objections:

    • Evolution and Randomness: While natural selection can act on variations in existing biological systems, it requires a self-replicating system with heredity – which presupposes the existence of a functional coding and translation system. Natural selection is a filter and modifier of existing information; it is not a mechanism for generating a semiotic code from scratch. Randomness, by definition, lacks the capacity to produce the specified, functional, arbitrary conventions and the integrated interpretive machinery characteristic of a semiotic code. The challenge is not just sequence generation, but the origin of the meaningful, rule-governed relationship between sequences and outcomes, and the system that enforces these rules.
    • “Frozen Accident” and Abiogenesis Challenges: Hypotheses about abiogenesis and early life (like the RNA world) face significant hurdles in explaining the origin of this integrated semiotic system. The translation machinery is a highly complex and interdependent system (a “chicken-and-egg” problem where codons require tRNAs and synthetases to be read, but tRNAs and synthetases are themselves encoded by and produced through this same system). The origin of the arbitrary codon-amino acid assignments and the simultaneous emergence of the complex machinery to interpret them presents a significant challenge for gradual, undirected assembly driven solely by chemical or physical affinities.
    • Biochemical Processes vs. Interpretation: The argument does not claim that a ribosome is a conscious entity “interpreting” in the human sense. Instead, it argues that the system it is part of (the genetic code and translation machinery) functions as an interpretive framework because it reads symbols (codons) and acts according to established, arbitrary rules (the genetic code’s assignments) to produce a specific output (amino acid sequence), where this relationship is not based on direct physical necessity but on a mapping established by the code’s design. This rule-governed, symbolic mapping, independent of physical causation between symbol and meaning, is the defining feature of a semiotic code requiring an intelligence to establish the rules and the system.
    • God-of-the-Gaps: This argument is not based on mere ignorance of a natural explanation. It is a positive argument based on the nature of the phenomenon itself. Semiotic codes, wherever their origin is understood (human language, computer code), are the products of intelligent activity involving the creation and implementation of arbitrary conventions and interpretive systems for goal-directed communication. The argument posits that DNA exhibits these defining characteristics and therefore infers a similar type of cause in its origin, based on a uniformity of experience regarding the necessary preconditions for semiotic systems.

    In conclusion, the sophisticated, arbitrary, and rule-governed nature of the genetic code and its associated translation machinery point to it being a semiotic system. Based on the inherent requirements for creating such a system—namely, the capacities for intentionality, symbolic representation, rule-creation, and system design—the origin of DNA’s information is best explained by the action of an intelligent agent.

  • Chromosome 2 Fusion: Evidence Out Of Thin Air?

    Chromosome 2 Fusion: Evidence Out Of Thin Air?

    The story is captivating and frequently told in biology textbooks and popular science: humans possess 46 chromosomes while our alleged closest relatives, chimpanzees and other great apes, have 48. The difference, evolutionists claim, is due to a dramatic event in our shared ancestry – the fusion of two smaller ape chromosomes to form the large human Chromosome 2. This “fusion hypothesis” is often presented as slam-dunk evidence for human evolution from ape-like ancestors. But when we move beyond the narrative and scrutinize the actual genetic data, does the evidence hold up? A closer look suggests the case for fusion is far from conclusive, perhaps even bordering on evidence conjured “out of thin air.”

    The fusion model makes specific predictions about what we should find at the junction point on Chromosome 2. If two chromosomes, capped by protective telomere sequences, fused end-to-end, we’d expect to see a characteristic signature: the telomere sequence from one chromosome (repeats of TTAGGG) joined head-to-head with the inverted telomere sequence from the other (repeats of CCCTAA). These telomeric repeats typically number in the thousands at chromosome ends.  
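    The predicted head-to-head signature can be made concrete with a short sketch. The repeat counts are illustrative (ten per side rather than thousands), and the sequences are constructed for demonstration, not genomic data:

    ```python
    # Sketch of the predicted head-to-head fusion signature described above.
    def reverse_complement(seq):
        """Return the reverse complement of a DNA sequence."""
        pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
        return "".join(pairs[base] for base in reversed(seq))

    TELOMERE = "TTAGGG"

    # CCCTAA is simply TTAGGG read on the opposite strand, which is why an
    # end-to-end fusion should place the two motifs back to back.
    assert reverse_complement(TELOMERE) == "CCCTAA"

    # An idealized junction: forward repeats meeting reverse-complement
    # repeats (10 shown per side for brevity; real telomeres have thousands).
    ideal_junction = TELOMERE * 10 + reverse_complement(TELOMERE) * 10

    print(ideal_junction[48:72])  # prints TTAGGGTTAGGGCCCTAACCCTAA
    ```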

    The Missing Telomere Signature

    When scientists first looked at the proposed fusion region (locus 2q13), they did find some sequences resembling telomere repeats (IJdo et al., 1991). This was hailed as confirmation. However, the reality is much less convincing than proponents suggest.

    Instead of thousands of ordered repeats forming a clear TTAGGG…CCCTAA structure, the site contains only about 150 highly degraded, degenerate telomere-like sequences scattered within an ~800 base pair region. Searching a much larger 64,000 base pair region yields only 136 instances of the core TTAGGG hexamer, far short of a telomere’s structure. Crucially, the orientation is often wrong – TTAGGG motifs appear where CCCTAA should be, and vice-versa. This messy, sparse arrangement hardly resembles the robust structure expected from even an ancient, degraded fusion event.
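    As a toy illustration of the kind of motif counting involved, the following sketch contrasts an idealized telomere array with a fabricated “degenerate” region. Neither string is real genomic sequence, and the counts are for illustration only:

    ```python
    # Toy comparison of motif density: an ordered telomere array versus a
    # fabricated degenerate region with scrambled, mixed-orientation motifs.
    def count_motif(seq, motif="TTAGGG"):
        """Count non-overlapping occurrences of a hexamer motif."""
        return seq.count(motif)

    pristine = "TTAGGG" * 1000                          # 6,000 bp of ordered repeats
    degenerate = ("TTAGGC" + "CCCTAA" + "TAGGGT") * 50  # 900 bp, scrambled

    print(count_motif(pristine))              # 1000 exact forward hexamers
    print(count_motif(degenerate))            # 0 exact forward hexamers
    print(count_motif(degenerate, "CCCTAA"))  # 50 reverse-orientation motifs
    ```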

    Furthermore, creationist biologist Dr. Jeffrey Tomkins discovered that this alleged fusion site is not merely inactive debris; it falls squarely within a functional region of the DDX11L2 gene, likely acting as a promoter or regulatory element (Tomkins, 2013). Why would a supposedly non-functional scar from an ancient fusion land precisely within, and potentially regulate, an active gene? This finding severely undermines the idea of it being simple evolutionary leftovers.

    The Phantom Centromere

    A standard chromosome has one centromere. Fusing two standard chromosomes would initially create a dicentric chromosome with two centromeres – a generally unstable configuration. The fusion hypothesis thus predicts that one of the original centromeres must have been inactivated, leaving behind a remnant or “cryptic” centromere on Chromosome 2.  

    Proponents point to alpha-satellite DNA sequences found around locus 2q21 as evidence for this inactivated centromere, citing studies like Avarello et al. (1992) and the chromosome sequencing paper by Hillier et al. (2005). But this evidence is weak. Alpha-satellite DNA is indeed common near centromeres, but it’s also found abundantly elsewhere throughout the genome, performing various functions.  

    The Avarello study, conducted before full genome sequencing, used methods that detected alpha-satellite DNA generally, not functional centromeres specifically. Their results were inconsistent, with the signal appearing in less than half the cells examined – hardly the signature of a definitive structure. Hillier et al. simply noted the presence of alpha-satellite tracts, but these specific sequences are common types found on nearly every human chromosome and show no unique similarity or phylogenetic clustering with functional centromere sequences. There’s no compelling structural or epigenetic evidence marking this region as a bona fide inactivated centromere; it’s simply a region containing common repetitive DNA.

    Uniqueness and the Mutation Rate Fallacy

    Adding to the puzzle, the specific short sequence often pinpointed as the precise fusion point isn’t unique. As can be demonstrated using the BLAT tool, this exact sequence appears on human Chromosomes 7, 19, and the X and Y chromosomes. If this sequence is the unique hallmark of the fusion event, why does it appear elsewhere? The evolutionary suggestion that these might be remnants of other, even more ancient fusions is pure speculation without a shred of supporting evidence.

    The standard evolutionary counter-argument to the lack of clear telomere and centromere signatures is degradation over time. “The fusion happened millions of years ago,” the reasoning goes, “so mutations have scrambled the evidence.” However, this explanation crumbles under the weight of actual mutation rates.

    Using accepted human mutation rate estimates (Nachman & Crowell, 2000) and the supposed 6-million-year timeframe since divergence from chimps, we can calculate that any given site in the ~800 base pair fusion region had well under a one percent chance of mutating even once during that entire period. The observed mutation rate is simply far too low to account for the dramatic degradation required to turn thousands of pristine telomere repeats and a functional centromere into the sequences we see today. Ironically, the known mutation rate argues against the degradation explanation needed to salvage the fusion hypothesis.
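    To make the arithmetic explicit, here is a back-of-envelope sketch under commonly cited assumptions: a per-site, per-generation mutation rate of about 2.5 × 10⁻⁸ (the Nachman & Crowell estimate) and an assumed 20-year generation time. The parameter values are illustrative assumptions, not measurements taken from the fusion site itself.

    ```python
    # Back-of-envelope expectation for per-site mutation over the claimed
    # timeframe. All parameter values are assumptions for illustration.
    MU = 2.5e-8                      # mutations per site per generation (assumed)
    GENERATIONS = 6_000_000 / 20     # 6 My at an assumed 20-year generation time

    # Probability that any one specific site mutates at least once.
    p_site = 1 - (1 - MU) ** GENERATIONS

    print(f"per-site mutation probability: {p_site:.2%}")  # roughly 0.75%
    print(f"expected altered sites per 1,000 bp: {p_site * 1000:.1f}")
    ```

    Under these assumptions only a fraction of one percent of sites would be expected to change, which is the quantitative form of the point above: far too little change to scramble thousands of ordered repeats beyond recognition.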

    Common Design vs. Common Ancestry

    What about the general similarity in gene order (synteny) between human Chromosome 2 and chimpanzee chromosomes 2A and 2B? While often presented as strong evidence for fusion, similarity does not automatically equate to ancestry. An intelligent designer reusing effective plans is an equally valid, if not better, explanation for such similarities. Moreover, the “near identical” claim is highly exaggerated; large and significant differences exist in gene content, control regions, and overall size, especially when non-coding DNA is considered (Tomkins, 2011, suggests overall similarity might be closer to 70%). This makes sense when considering that coding regions provide the recipes for proteins, which organisms with similar needs would be expected to share.

    Conclusion: A Story in Search of Evidence

    When the genetic data for human Chromosome 2 is examined without a pre-commitment to the evolutionary narrative, the evidence for the fusion event appears remarkably weak. So much so that it raises the question: was this a mad dash to explain the blatant differences between the human and chimp genomes? The expected telomere signature is absent, replaced by a short, jumbled sequence residing within a functional gene region. The evidence for a second, inactivated centromere relies on the presence of common repetitive DNA lacking specific centromeric features. The supposed fusion sequence isn’t unique, and known mutation rates are woefully insufficient to explain the degradation required by the evolutionary model over millions of years.

    The chromosome 2 fusion story seems less like a conclusion drawn from compelling evidence and more like an interpretation imposed upon ambiguous data to fit a pre-existing belief in human-ape common ancestry. The scientific data simply does not support the narrative. Perhaps it’s time to acknowledge that the “evidence” for this iconic fusion event may indeed be derived largely “out of thin air.”

    References: