Creation Questions

Tag: Philosophy

  • The Idealist Argument from Contingency

    The Idealist Argument from Contingency

    Introduction: Observing Ex Nihilo Creation

    As I have been promoting the Kalam cosmological argument, I’ve been thinking deeply about its particular criticisms. To be clear, most criticisms of Craig’s Kalam fail; however, some are fascinating and get you thinking about the particulars, such as what existence means and whether ex nihilo (out of nothing) creation is an ontologically distinct kind of creation which we don’t observe.

    On one hand, most proponents of the Kalam are perfectly willing to grant that we don’t observe ex nihilo creation and redirect the skeptic to the metaphysical entailments of creation (usually from the principle of sufficient reason), suggesting that the universe, and all things which have ontology in and of themselves, do need efficient causes. Yet, I really don’t think we need to cede ground here. As I’ve meditated on this, I’ve come to the conclusion that we do in fact observe ex nihilo creations—from our minds.

    What do I mean by this? Well, take any concept of a “thing”, let’s say a wooden chair (it’s the favorite of philosophers), and ask ourselves how it is that this thing exists in the “real” world. When we examine a chair carefully, we discover something remarkable: the chair as a unified object—as a chair—does not exist in the physical substrate at all. What exists physically are atoms arranged in a particular configuration. The “chairness” of this arrangement, the ontological unity that makes these atoms one thing rather than billions of separate things, is something imposed by mind. In this sense, we observe minds creating genuine ontological categories ex nihilo—not creating the matter itself, but creating the very thingness that makes a collection of particles into a unified object.

    This realization leads to a profound philosophical argument that I believe has been insufficiently explored in contemporary philosophy of religion.

    The Nature of Composite Objects

    We land on a few interesting features when we examine any purported “thing” in the material world. For one, a thing is instantiated in the world separately from its physical parts. This chair, for instance, may be made of wood, but metals, plastics, and fabrics could be substituted without changing its identity within a category (or genus). There is something higher than mere components which brings the composition into a unified whole.

    But what is this “something higher”? The materialist wants to say it’s just the arrangement of particles. But this raises immediate problems. Consider: when exactly does a collection of wood atoms become a chair? When the carpenter has assembled 50% of the pieces? 75%? 90%? What if one leg is broken—is it still a chair, or merely chair-shaped atoms? What if the leg is cracked but still functional? The materialist has no principled answer to these questions because “chairness” is not a property that can be reduced to particle arrangements.

    The problem becomes even clearer when we consider boundaries. A chair has clear boundaries to us—we know where the chair ends and the floor begins. But at the atomic level, there are no such boundaries. Atoms are constantly exchanging electrons, being shed and replaced. Air molecules intermingle with the chair’s molecules at the surface. There is no physical demarcation that says “here the chair ends.” The boundaries we perceive (form) are imposed by our minds based on function and purpose.

    This leads to several different possible conclusions about where a “thing” must be sustained. We are asking where something really exists, ontologically speaking. To be precise, there are three exhaustive options: (1) the thing is sustained in a domain of its own (like Platonic Forms), (2) the thing is sustained in the material domain (by physics and chemistry alone), (3) the thing is sustained in the mental domain (by a mind). I invite the reader to consider alternative hypotheses and notice that these three choices really do cover the gamut.

    The Trilemma of Ontology

    Let us examine each option in turn to see which can bear the weight of explanation.

    Option 1: Material Sustenance (Reductionist Materialism)

    The materialist position runs into a contradiction regarding unified composite objects. The materialist must assume that composite objects, like a rock, have no inherent boundaries. Physical things are mere undifferentiated clusters of atoms. From here, the materialist has two options. They can either accept a form of object nihilism, where no composite objects actually exist, or they can turn to a nominalistic approach.

    Regarding nominalism, we must ask: why would we call a rock a “rock” apart from its ontology, that is, apart from its actually being a rock? If things, like a rock, exist in name only, then they do not really exist within distinct categories or kinds. This renders their definitions completely meaningless, because a good definition requires classification within the context of genus-species relationships. If things really exist as distinct objects, it is only because we have determined some aspect of their ontology over and above what reductionism or materialism can explain. So in reality, there is no sustainable nominalist approach for the materialist: one is either an object nihilist, or one must accept that real things are established some other way.

    It seems to me that something like a rock is a perfect example of a thing that could not be established as ontologically distinct without a mind. Is a pebble a rock? Is a handful of sand many small pebbles? Why do we call a variant quantity of small rocks a singular category? Why do we delineate between singular grains of sand and groups of pebbles? Is it not an arbitrary size distinction relative to our observational abilities and purposes?

    For another example, consider why people groups such as Inuit tribes, who live in snowy environments, have many particular names for snow, whereas those tribes who live near the equator do not. It is because words are conventions within social groups to establish meaningful concepts. To someone who may see snow one day of the year, different textures and variations of snow are not meaningfully distinct. All composite objects that exist—including the very words that I am writing—are things minds have established as meaningful and bounded.

    Therefore, a rock is meaningfully different from a pebble and a group of pebbles from sand only insofar as our use or intent dictates. Our experience of snow presupposes our naming conventions of snow. If you learn a language with seven words for snow, but you have always lived in a desert, you will not suddenly understand snow differently—you need to experience snow differently first.

    But the materialist might object: “Even if our labels are arbitrary, the physical arrangements are real. When I sit in a chair, something physical holds me up.” This is true, but it misses the point. Yes, atoms arranged in a certain configuration will bear weight. But those atoms bearing weight is not the same as a chair existing. The chair, as a unified object with identity over time, with the capacity to be the same chair even if we replace parts, with clear boundaries—this is not present in the physical substrate. It is a mental construct imposed on that substrate.

    Consider the philosophical puzzle of the Ship of Theseus. If we replace every plank of a ship, one by one, is it the same ship? The puzzle has no answer in purely physical terms because the ship’s identity is not a physical property. Identity over time, unity, and boundaries are all features imposed by minds, not discovered in matter.

    If you accept Object Nihilism for composite objects and argue for a fundamental realist view where only quarks and leptons (or quantum fields) exist, then you face equally severe problems. What is your evidence that you exist ontologically? An entity which doesn’t exist as a unified object cannot consistently argue that some things do exist as unified objects. Moreover, what is your basis for assuming you know the “stuff” which is fundamental to reality? Even the quantum field is not necessarily the bottom line. Who can say what energy ultimately is? What’s to say that what’s fundamental isn’t also mind-contingent? That it isn’t mathematical in nature—which would itself require mental grounding?

    This view has made a distinction where everything composite is nominal except for something that has never been directly observed as a truly fundamental “thing.” How does one justify this distinction in the first place? It seems to me a contradiction in reasoning to deny mind-dependent categories for composite objects while affirming mind-independent categories for fundamental particles. Both require the same kind of ontological boundary-drawing that only minds can provide.

    Option 2: Self-Sustaining Forms (Platonism)

    From here, a skeptic might say, “Okay, the chair or rock isn’t purely material. But maybe it’s just a Platonic Form. It sustains itself in an abstract realm. Why do we need a Mind?”

    This is a more sophisticated response, but it ultimately fails for several reasons.

    First, abstract objects have no causal power. A Platonic Form of “chairness” cannot reach down into the physical world and organize atoms into a chair configuration. It cannot explain why this particular collection of atoms instantiates the form rather than some other collection. The relationship between abstract forms and concrete particulars remains deeply mysterious in Platonic metaphysics—so mysterious that even Plato himself struggled with it in dialogues like the Parmenides.

    Second, and more fundamentally, it is unintelligible to think of abstract objects like propositions, mathematical truths, or forms existing without a mind to think them. As Alvin Plantinga has argued, propositions are the contents of thoughts. They are the sort of thing that exists in minds. To say they exist “on their own” in some abstract realm is to commit a category error—it’s like saying colors exist independently of anything colored, or that motion exists independently of anything moving.

    Consider what a Platonic Form would have to be: a truth, a concept, a logical structure. But these are precisely the kinds of things that exist as thoughts. A thought cannot exist without a thinker any more than a dance can exist without a dancer. The Platonist wants to affirm that 2+2=4 exists eternally and necessarily, and I agree. But this truth exists as an eternal thought in an eternal mind, not as a free-floating abstraction.

    Third, many Platonic forms presuppose relationships, which themselves presuppose minds. Take the concept of justice. Justice involves right relations between persons. But “right relations” is an inherently normative concept that makes no sense without minds capable of recognizing and valuing those relations. Or consider mathematical sets. A set is defined by a rule of membership—a mental act of grouping things together according to a criterion. Sets don’t group themselves.

    Therefore, if the “Blueprint” of the universe is real—if there truly are eternal structures, categories, and forms that ground the intelligibility of reality—these cannot be free-floating abstract objects. They must be Divine Thoughts, eternally sustained in a Divine Mind.

    Option 3: Mental Sustenance (Idealism)

    This leaves us with the third option: composite objects exist insofar as they are sustained by minds. This may sound counterintuitive at first, but it’s the only option that avoids the contradictions of the previous two.

    When a carpenter builds a chair, he doesn’t merely arrange atoms—he imposes a conceptual unity on those atoms. He creates boundaries where there were none. He establishes identity conditions (this is one chair, not four separate legs plus a seat plus a back). He determines a function and purpose that gives meaning to the configuration. All of these acts are mental, not physical.

    But here’s the crucial question: once the carpenter stops thinking about the chair, does it cease to exist? In one sense, yes—the carpenter’s mind is no longer actively sustaining it. But in another sense, no—the chair continues to be recognized as a chair by other minds. As long as someone conceptualizes those atoms as a unified object called “chair,” it exists as such.

    This actually goes back to Bishop George Berkeley’s famous argument: “If a tree falls in the woods and no one is there to hear it, does it make a sound?” In a sense, if we stipulate that there is no wildlife and trees lack the ability to register sound frequencies, the fall really does not make a sound. This is because sound is a perception, a mental phenomenon. There are pressure waves in the air, certainly, but “sound” as we experience it requires a mind to interpret those waves.

    However, Berkeley went further than this, and so must we. Berkeley argued that material objects continue to exist when no human observes them because God’s mind perpetually perceives them. I want to make a similar but distinct claim: composite objects, categories, and the conceptual structure that makes reality intelligible all require perpetual mental sustenance. Not just observation, but active ontological grounding.

    An analogy may help: consider an author writing a novel. The characters in the novel have a kind of existence—they’re not nothing. But their existence is entirely dependent on the author’s creative act and the mind of any reader engaging with them. If every copy of the book were destroyed and everyone forgot the story, the characters would cease to exist in any meaningful sense. They have no “existential inertia” apart from minds sustaining them.

    I propose that composite objects in our world are similar. The atoms may have mind-independent existence (though even this is debatable), but the chairness—the unified object with boundaries, identity, and purpose—exists only in minds. And since these objects continue to exist even when finite human minds aren’t thinking about them, they must be sustained by an infinite, omnipresent Mind.

    The Formal Argument

    All this contemplation leads me to the first formulation of a new kind of contingency argument which I call the Argument from Ontological Sustenance (or Idealist Argument from Contingency):

    Premise 1: All composite objects require a mind to sustain their ontology.

    Premise 2: The universe is a composite object.

    Conclusion: Therefore, the universe requires a mind to sustain its ontology.

    This is a logically valid argument, meaning if the premises are true, the conclusion must be as well.
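    For readers who like their syllogisms machine-checked, the inference is a straightforward instance of universal instantiation and modus ponens. Here is a hedged sketch in Lean 4, where the predicate and object names (Composite, MindSustained, universe) are my own labels for the argument’s terms, not anything drawn from a formal library:

```lean
-- A sketch of the Argument from Ontological Sustenance.
-- The premises are taken as hypotheses; the conclusion follows
-- by universal instantiation and modus ponens.
theorem ontological_sustenance
    (Object : Type)
    (Composite MindSustained : Object → Prop)
    (universe : Object)
    (p1 : ∀ x, Composite x → MindSustained x)  -- Premise 1
    (p2 : Composite universe)                  -- Premise 2
    : MindSustained universe :=                -- Conclusion
  p1 universe p2
```

    The formalization shows only validity, of course; the philosophical work lies in defending the two premises, as the surrounding text attempts.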

    The first premise has been defended at length above. The key insight is that composite objects—things made of parts organized into a unity—have no ontological status in the physical substrate alone. Their unity, boundaries, and identity exist only as mental constructs.

    The second premise should be relatively uncontroversial. The universe is composed of parts (galaxies, stars, planets, particles) organized into a whole. It has boundaries (even if those boundaries are the limits of spacetime itself). It has an identity that persists through time. All of these features require the same kind of mental grounding that chairs and rocks require.

    Therefore, the universe itself must be sustained in its existence as a unified, bounded entity by a mind. And since the universe contains all finite minds, this sustaining mind must be transcendent—beyond the universe, not part of it.

    Why Not Pantheism?

    An obvious objection arises: couldn’t the universe itself be the Mind that sustains all these categories? This would be a pantheistic solution—identifying God with the universe itself rather than positing a transcendent deity.

    This fails for several reasons:

    Step 1: A mind is a container for concepts. It is the sort of thing that has thoughts, holds ideas, and maintains logical relationships between propositions.

    Step 2: Necessary truths (logic, mathematics, metaphysics) exist outside our finite minds. We discover them; we don’t invent them. This implies a Greater Mind contains them.

    Step 3: Could this Greater Mind be the Universe itself?

    Refutation: No. A “Universe Mind” would be composed of parts (galaxies, energy fields, quantum states) and subject to entropy (time, change, decay). But anything composed of parts is contingent—dependent on those parts and their organization. Anything subject to entropy requires external sustenance or an explanation for why it continues to exist through change.

    Moreover, the universe is precisely the kind of composite object that needs mental grounding. To say the universe grounds its own categories is circular—it’s like saying a novel writes itself, or a dance choreographs itself.

    Conclusion: The Ultimate Sustainer cannot be the Universe. It must be Transcendent (distinct from creation) and Non-Contingent (self-existent, not dependent on anything external to itself).
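    The refutation has the same checkable shape: anything composite is contingent, and nothing contingent can be the ultimate sustainer. A hedged Lean 4 sketch (again with predicate names of my own choosing, merely labeling the argument’s terms) makes the reductio explicit:

```lean
-- If being composite entails contingency, and the ultimate sustainer
-- cannot be contingent, then no composite "Universe Mind" qualifies.
theorem no_universe_mind
    (Mind : Type)
    (Composite Contingent UltimateSustainer : Mind → Prop)
    (universeMind : Mind)
    (h1 : ∀ m, Composite m → Contingent m)
    (h2 : ∀ m, UltimateSustainer m → ¬ Contingent m)
    (h3 : Composite universeMind)
    : ¬ UltimateSustainer universeMind :=
  fun hs => h2 universeMind hs (h1 universeMind h3)
```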

    The Divine Attributes

    Once we establish that a Transcendent, Non-Contingent Mind sustains all reality, we can derive further attributes through the classical logic of Act and Potency (pure actuality).

    Premise: A Non-Contingent Mind has no external cause, and therefore no external limitations or deficiencies. It is “Pure Act”—fully realized, with no unrealized potential.

    Omnipotence

    To possess “some” power but not “all” power is to have a limitation—an unrealized potential to do more. But a Non-Contingent Being has no unrealized potentials by definition. Nothing external limits what it can do. Therefore, it possesses all power—omnipotence.

    Omniscience

    Ignorance is a lack, a privation of knowledge. A Fully Realized Mind has no lacks or privations. Moreover, if this Mind sustains all reality through its thoughts, it must know everything it sustains—otherwise, how could it sustain it? Therefore, it knows all things—omniscience.

    Omnibenevolence

    Evil, in the classical metaphysical tradition, is a privation—a lack of goodness or being. It is not a positive reality but an absence, like cold is the absence of heat or darkness the absence of light. Since this Mind is Fully Realized Being with no privations, it contains no evil. It is Pure Goodness—omnibenevolence.

    Eternity and Immutability

    Change implies potentiality—the ability to become something one is not yet. But a Non-Contingent Being has no potentiality. Therefore, it does not change. It exists eternally in a timeless present, not subject to temporal succession.

    Personhood

    This Mind thinks, knows, and creates categories. These are the activities of a person, not an impersonal force. Moreover, the categories it sustains include moral values, relational properties, and purposes—all of which presuppose personhood. Therefore, this Being is personal.

    The Christian Specificity

    We have now established the existence of a Transcendent, Omnipotent, Omniscient, Omnibenevolent, Eternal, Personal Mind that sustains all reality. This is recognizably the God of classical theism. But can we go further and identify this God with the specific God of Christianity?

    The Argument from Relational Necessity

    Premise 1: A God who is Personal, Truthful, and Loving is inherently Relational. Love seeks connection; truth seeks to be known; personhood seeks communion.

    Premise 2: To be fully known and to establish a perfect relationship with finite creatures, this Infinite God must bridge the ontological gap. He cannot remain purely transcendent and abstract.

    Consider: if God is perfectly loving, His love must be expressed, not merely potential. If God is truth, He must reveal Himself, not remain hidden. If God is personal, He must enter into relationship with persons He has created. But finite creatures cannot reach up to an infinite God—the ontological distance is too vast. Therefore, God must reach down to us.

    The Filter

    With this criterion, we can evaluate the world’s major religious traditions:

    Deism/Pantheism: These fail immediately because they offer no relationship. Deism presents a God who creates and withdraws. Pantheism identifies God with the universe, making genuine relationship impossible.

    Unitarian Monotheism (Islam/Judaism): These traditions affirm God’s transcendence and offer prophetic revelation—books and laws sent from on high. But God remains fundamentally separate. He sends messages but does not cross the boundary to unite with creation. The relationship is external, mediated through texts and commandments, never achieving full intimacy or union.

    Christianity: This succeeds as the only worldview where the Sustainer becomes the Sustained. In the doctrine of the Incarnation, God doesn’t merely send a message about Himself—He enters history as a human being. The Infinite becomes finite. The Creator becomes a creature. The Mind that sustains all reality subjects Himself to the very categories He created.

    This is not merely unique—it’s philosophically necessary. If God is to bridge the ontological gap between infinite and finite, between Creator and creature, He must do so by becoming both. The Incarnation is the only way for perfect relationship to be achieved.

    Verification Through Human Experience

    The Christian worldview also uniquely and truthfully describes the human condition. We experience ourselves as simultaneously possessing great dignity (made in God’s image, capable of reason and love) and great depravity (prone to selfishness, cruelty, and irrationality). We long for meaning, purpose, and redemption, yet find ourselves unable to achieve these on our own.

    Christianity explains this through the doctrine of the Fall and offers a solution through Redemption—not by our own efforts, but by God’s gracious action in Christ. This narrative aligns with both our philosophical conclusions about God’s nature and our existential experience of ourselves.

    Conclusion

    The Mind that sustains the rock, the chair, and every composite object in reality is the same Mind that entered the world as Jesus of Nazareth. From the seemingly simple question “What makes a chair a chair?” we have traced a path to the central truth of Christianity: God is not distant or abstract, but intimately involved in every aspect of reality, from the smallest pebble to the vast cosmos, from the categories that make thought possible to the incarnate life that makes redemption possible.

    This is the Argument from Ontological Sustenance. Like all philosophical arguments, it invites scrutiny, challenges, and further refinement. But I believe it opens a fruitful path for natural theology—one that begins not with cosmological speculation about the universe’s beginning, but with careful attention to the ontological structure of everyday objects and the categories that make them intelligible.

    Every time we recognize a chair as a chair, a rock as a rock, or the universe as a cosmos, we are implicitly acknowledging the work of the Divine Mind that makes such recognition possible.

  • The Irreducibility of Life

    The Irreducibility of Life

    In his paper “Life Transcending Physics and Chemistry,” Michael Polanyi examines biological machines in a way that illuminates the explanatory failures of materialism. The prevailing materialist paradigm, that life can be fully explained by the laws of inanimate nature, fails to account for higher-ordered realities whose operations and structures involve non-material judgements and interpretations. He specifically addresses the views of scientists such as Francis Crick, who, along with James Watson, argued for a totally reductionist and nominalist view based on their discovery of the structure of DNA. For Polanyi, all biological organisms have a life-transcending nature akin to machines and their transcendent properties. His central argument rests on the concept of “boundary control”: while there are laws that govern physical reactions (as Crick would accept), there are also particular laws of form and function which are unique and separate from those lower-level laws.

    There is a real clash between Polanyi’s position and the reductionist/nominalist position commonly held by molecular biologists. To broach this divergence, he explains how the then-recent discovery of the genetic function of DNA was interpreted as the final blow to vitalist thought within the sciences. He writes:

    “The discovery by Watson and Crick of the genetic function of DNA (deoxyribonucleic acid), combined with the evidence these scientists provided for the self-duplication of DNA, is widely held to prove that living beings can be interpreted, at least in principles, by the laws of physics and chemistry.”

    Polanyi explicitly rejects Crick’s interpretation, which represents the mainstream position in both academia and popular discourse. Indeed, Polanyi acknowledges that his own principle “has so far been accepted by few biologists and has been sharply rejected by Francis Crick, who is convinced that all life can be ultimately accounted for by the laws of inanimate nature.” This same sentiment can be found in Crick’s book “Of Molecules and Men.” Crick writes the following:

    “Thus eventually one may hope to have the whole of biology “explained” in terms of the level below it, and so on right down to the atomic level.”

    To dismantle the materialist argument, Polanyi utilizes the analogy of a machine. A machine cannot be defined or understood solely through the physical and chemical properties of its materials. Take a watch and put it into a machine that can read a detailed atomic map of the device: can even the best chemist give any coherent reason as to whether the watch is functioning or not? Worse—can one even tell you what a watch is, if all that exists is matter in motion for no particular reason? Polanyi writes it best:

    “A complete physical-chemical topography of my watch—even though the topography included the changes caused by the movements in the watch—would not tell us what this object is. On the other hand, if we know watches, we would recognize an object as a watch by a description of it which says that it tells the time of the day… We know watches and can describe one only in terms like ‘telling the time,’ ‘hands,’ ‘face,’ ‘marked,’ which are all incapable of being expressed by the variables of physics, length, mass, and time.”

    Once you see this distinction, you are invariably led (as Polanyi was) to two unique substrata of explanation: what he calls the concept of dual control. Obviously, there are physical laws which dictate the constraints and operations of all matter, and all material things can be explained by these very laws. However, those laws are only meaningfully called constraints when there is some notion of intention or design to be constrained. The shape of any machine, man-made or biological, is not determined by natural laws. Not only is it not determined by them; it cannot be determined by them in any way. Polanyi elaborates on this relationship:

    “The machine is a machine by having been built and being then controlled according to principles of engineering. The laws of physics and chemistry are indifferent to these principles; they would go on working in the fragments of the machine if it were smashed. But they serve the machine while it lasts; machines rely for their operations always on the laws of physics and chemistry.”

    As I hinted at before, Polanyi also applies this logic to biological systems, arguing that morphology is a boundary condition in the same way that a design of a machine is a boundary condition. Biology cannot be reduced to physics because the structure that defines a living being is not the result of physical-chemical equilibration. Physical laws do not intend to create nor do they care that anything functions. Instead, “biological principles are seen then to control the boundary conditions within which the forces of physics and chemistry carry on the business of life.”

    Where Polanyi and Crick truly disagree, then, is in their interpretation of the explanatory power of nature and how DNA is implicated within these frameworks. While Crick views DNA as a chemical agent that proves reducibility, Polanyi argues that the very nature of DNA as an information carrier proves the opposite. For a molecule to function as a code, its sequence cannot be determined by chemical necessity. If chemical laws dictated the arrangement of the DNA molecule, it would be a rigid crystal incapable of conveying complex, variable information. Polanyi writes:

    “Thus in an ideal code, all alternative sequences being equally probable, its sequence is unaffected by chemical laws, and is an arithmetical or geometrical design, not explicable in chemical terms.”

    By treating DNA as a transmitter of information, Polanyi aligns it with other non-physical forms of communication, such as a book. The physical chemistry of the ink and paper does not explain the content of the text. Similarly, the chemical properties of DNA do not explain the genetic information it carries. Polanyi contends that Crick’s own theory inadvertently supports this non-materialist conclusion:

    “The theory of Crick and Watson, that four alternative substituents lining a DNA chain convey an amount of information approximating that of the total number of such possible configurations, amounts to saying that the particular alignment present in a DNA molecule is not determined by chemical forces.”
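    Polanyi’s point about equally probable sequences can be restated in elementary information-theoretic terms. The following Python sketch (my own illustrative arithmetic, not anything from Polanyi’s paper) computes the number of possible sequences and the corresponding information capacity for an n-base chain: with four equally probable bases per position, the capacity is log2(4^n) = 2n bits, a capacity that would collapse to zero if chemical necessity forced a single fixed arrangement.

```python
import math

def sequence_count(n: int) -> int:
    """Number of equally probable base sequences of length n,
    given 4 alternative bases (A, C, G, T) per position."""
    return 4 ** n

def information_bits(n: int) -> float:
    """Information capacity in bits of an n-base chain whose
    sequence is unconstrained by chemical necessity."""
    return math.log2(sequence_count(n))

# If chemistry dictated the sequence (a rigid crystal), only one
# arrangement would be possible, giving log2(1) = 0 bits.
if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"{n} bases: {sequence_count(n)} sequences, "
              f"{information_bits(n):.0f} bits of capacity")
```

    The arithmetic illustrates why an information-bearing sequence must be chemically arbitrary: capacity is counted over alternatives, and necessity leaves no alternatives to count.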

    Therefore, the pattern of the organism, derived from the information in DNA, represents a constraint that physics cannot explain. It is a boundary condition that harnesses matter. Polanyi concludes that the organization of life is a specific, highly improbable configuration that transcends the laws governing its atomic constituents:

    “When this structure reappears in an organism, it is a configuration of particles that typifies a living being and serves its functions; at the same time, this configuration is a member of a large group of equally probable (and mostly meaningless) configurations. Such a highly improbable arrangement of particles is not shaped by the forces of physics or chemistry. It constitutes a boundary condition, which as such transcends the laws of physics and chemistry.”

    In this way, Polanyi refutes the nominalist materialist perspective by demonstrating that the governing principles of life—its form, function, and information content—are logically distinct from, and irreducible to, the physical laws that govern inanimate matter. Physical laws are, then, merely a piece of the puzzle of the explanation. What’s more, they are insufficient to account for the existence of particular organizations of matter which physical laws and chemistry are not determinative of.

  • Human Eyes – Optimized Design

    Human Eyes – Optimized Design

    Is the human eye poorly designed? Or is it optimal?

    If you ask most proponents of modern evolutionary theory, you will often hear that the eye is a prime example of unfortunate evolutionary history and dysteleology.

    There are three major arguments that are used in defending this view:

    The human eye:

    1. is inverted (retina) and wired backwards
    2. has a blind spot due to the nerve exit
    3. is fragile due to the risk of retinal detachment

    #1 THE HUMAN EYE IS INVERTED

    The single most famous critique is, of course, the backward wiring of the retina. An optimal sensor should use its entire surface area for data collection, right? Yet in the vertebrate eye, light must pass through a layer of axons and capillaries before it reaches the photoreceptors.

    Take the cephalopod eye: it has an everted retina, with the photoreceptors facing the light and the nerves behind them, so there is no blind spot at all. On this view, the reversed wiring of the human eye represents a mere local (rather than global) maximum, an eye that could only optimize so far because of its evolutionary history.

    Yet, this argument misses non-negotiable constraints. There is a metabolic necessity for the human eye which doesn’t exist in the squid or octopus.

    Photoreceptors (the rods and cones) have the highest metabolic rate of any cell in the body. They generate substantial heat, consume enormous amounts of oxygen, and undergo constant repair from the relentless photochemical damage of incoming photons. The energy demand is massive. This is an issue of thermoregulation, not just optics.

    The reason this is important is because the vertebrate eye is structured with an inverted retina precisely for the survival and longevity of these high-energy photoreceptors. These cells require massive, continuous nutrient and oxygen delivery, and rapid waste removal.

    The current inverted orientation is the only geometric configuration that allows the photoreceptors to be placed in direct contact with the Retinal Pigment Epithelium (RPE) and the choroid. The choroid, a vascular layer, serves as the cooling system and high-volume nutrient source, similar to a cooling unit directly attached to a high-performance processor.

    If the retina were wired forward, the neural cabling would form a barrier, blocking the connection between the photoreceptors and the choroid. This would inevitably lead to nutrient starvation and thermal damage. Not only that, but human photoreceptors constantly shed toxic outer segments due to damage, which must be removed via phagocytosis by the RPE. The eye needs the tips of the photoreceptors to be physically embedded in the RPE. 

    If the nerve fibers were placed in front, they would form a barrier preventing waste removal. This specific geometry is a geometric imperative for long-term molecular recycling, and it allows for eyes that routinely last 80+ years.

    Critics often insist however that even given the neural and capillary layers being necessary for metabolism, it is still a poor design because they block or scatter incoming light. 

    Yet, research has demonstrated that Müller glial cells span the thickness of the retina and act as essentially living fiber-optic cables. These cells possess a higher refractive index than the surrounding tissue, which gives them the capability to channel light directly to the cones with minimal scattering.

    So this feature goes from being a supposedly poor design choice to an elegant low-pass filter which improves the signal-to-noise ratio and visual acuity of the human eye.

    But wait, there’s more! The neural layers contain yellow pigments (lutein and zeaxanthin) which absorb excess blue and ultraviolet light that is highly phototoxic! This layer is basically a forcefield against harmful rays (photo-oxidative damage) which extends the lifespan of these super delicate sensors.

    #2 THE HUMAN EYE HAS A BLIND SPOT

    However, the skeptics will still push back (which leads to point number 2): surely a good design would not include a blind spot where the optic nerve runs through! And indeed this point is fairly powerful at a glance. But on further inspection, we see that this exit point, where literally millions of nerve fibers bundle together to exit the retina, is an example of optimized routing and not a critical flaw of any kind.

    This is true for many reasons. For one, bundling the nerves into a single reinforced exit point maximizes the structural robustness of the remaining retina. If the nerve fibers instead exited individually, or even in small clusters across the retina, it would radically lower the integrity of the whole design and make the retina prone to tearing during rapid eye movements (saccades). In other words, we wouldn’t be getting much REM sleep! More than that, we’d be missing out on most looking around of any kind.

    I’d say that even if that were the only advantage, the loss of a tiny fraction of our visual field would be worth the trade-off.

    Second, and this is important, the blind spot is functionally irrelevant. What do I mean by that? I mean that humans were designed with two eyes for the purpose of depth perception, i.e., understanding where things are in space. You can’t do that with one eye, so a one-eyed design is not an option. With two eyes, the functional retina of the left eye covers the blind spot of the right eye, and vice versa. There is no problem in this design so long as both full visual coverage and depth perception are preserved: which they are.

    Third, the optic disc is also used for integrated signal processing: it contains melanopsin-driven cells that calibrate brightness perception for the entire eye, using the exit cable as a sensor probe. The nerves, in other words, also detect brightness and run the logistics of a localized region, which is incredibly efficient.

    #3 THE HUMAN EYE IS VULNERABLE

    The vulnerability here specifically refers to retinal detachment, which occurs when the neural retina separates from the RPE. Why does this happen? It is a consequence of the retina being held loosely against the choroid, largely by hydrostatic pressure. Critics call this a failure point. Wouldn’t a good design fuse the retina solidly to the RPE, especially if the two need to stay connected? Well… no, not remotely.

    The RPE must actively transport massive amounts of fluid (approximately 10 liters per day) out of the subretinal space to the choroid to prevent edema (swelling) and maintain clear vision. A mechanically fused retina would impede this rapid fluid transport and waste exchange. Basically, the critics offer a solution which is really a non-solution. There is no possible way the eye could function at all by the means they suggest as the alternative “superior” version.

    So, what have we learned?

    The human eye is not a collection of accidents, but a masterpiece of constrained optimization. When the entire system (eye and brain) is evaluated, the result is astonishing performance. The eye achieves resolution at the diffraction limit (the theoretical physical limit imposed by the wave nature of light!) at the fovea, meaning it is hitting the maximum acuity possible for an aperture of its size.
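    To put a rough number on that claim, the Rayleigh criterion gives the smallest resolvable angle for an aperture of diameter \(D\) at wavelength \(\lambda\). Using illustrative values of a ~3 mm daylight pupil and 550 nm green light (assumed figures for the sketch, not taken from any specific study):

    ```latex
    \theta_{\min} \approx 1.22\,\frac{\lambda}{D}
                  = 1.22 \times \frac{550 \times 10^{-9}\,\text{m}}{3 \times 10^{-3}\,\text{m}}
                  \approx 2.2 \times 10^{-4}\ \text{rad}
                  \approx 0.8\ \text{arcmin}
    ```

    This is in the same ballpark as the roughly one-arcminute acuity of the fovea, which is the sense in which foveal vision operates near the physical limit for an aperture of its size.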

    The arguments that the eye is “sub-optimal” often rely on comparing it to the structurally simpler cephalopod eye. Yet, cephalopod eyes lack trichromatic vision (they don’t see color like we do), have lower acuity (on the scale of hundreds of times worse clarity), and only function for a lifespan of 1–2 years (whereas the human eye must self-repair and maintain high performance for eight decades). The eye’s complexity—the Müller cells, the foveal pit, and the inverted architecture—are the necessary subsystems required to achieve this maximal performance within the constraints of vertebrate biology and physics.

    That’s not even getting into things like the mitochondrial microlenses in our cells, which are essential for processing light. Recent research suggests that mitochondria in cone photoreceptors may actually function as micro-lenses that concentrate light, adding another layer of optical optimization. And it is optimization that would need to be in place, perhaps a lot earlier than even the inverted retinal structure.

    The fact remains that the eye is remarkably optimal, despite the critics’ best attempts at thwarting it. Therefore, the question remains: how could something so optimized evolve by random chance mutation, and do so both so early and so often in the history of biota?

  • Specious Extrapolations in Origin of Species

    Specious Extrapolations in Origin of Species

    In The Origin of Species, Darwin outlines evidence against the contemporary notion of species fixity, i.e., the idea that species represent immovable boundaries. He first uses the concepts of variations alongside his introduced mechanism of natural selection to create a plausible case for not merely variations, breeds, or races of organisms, but indeed species as commonly descended. Then, in chapter 4, after introducing a taxonomic tree as a picture of biota diversification, he writes, 

    “I see no reason to limit the process of modification, as now explained, to the formation of genera alone.”

    This sentence encapsulates the theoretical move that introduced the concept of universal common ancestry as a permissible and presently accepted scientific model. There is much to discuss regarding the arguments and warrants of the modern debate; however, let us take Darwin on his own terms. In those premier paragraphs of his seminal work, was Darwin’s extrapolation merited? Do the mechanisms and the evidence put forth for them bring us to this inevitable conclusion, or perhaps is the argument yet inconclusive? In this essay, we will argue that, while Darwin’s analogical reasoning was ingenious, his reliance on uniformitarianism and nominalism may render his extrapolation less secure than it first appears.

    In order to explain this, one must first understand the logical progression Darwin must follow. There are apparently three major assumptions—or premises. These are (1) analogism–artificial selection is analogous to natural selection, (2) uniformitarianism–variation is a mostly consistent and uniform process through biological time, and (3) nominalism–all variations and, therefore, all forms, vary by degree only and not kind. Here, we use ‘nominalism’ in the sense that species categories reflect human classification rather than intrinsic natural divisions, a position Darwin implicitly adopts.

    Of his three assumptions, one shows itself to be particularly strong—that of analogism. He spends most of the first four chapters defending this premise from multiple angles. He goes into detail on the powers of artificial selection in chapter one. His detail helps us identify which particular aspect of artificial selection leads to the observed robustness and fitness within its newly delineated populations. For this, he highlights mild selection over a long time. While one can see a drastic change in quick selection, this type of selection is less sustainable. It offers a narrower range of variable options (as variations take time to emerge).

    However, even with this carefully developed premise, let us not overlook its flaws. Notice that the evidence for the power of long-term selection is said to show that it brings about more robust or larger changes within some organisms in at least some environments. However, what evidence does Darwin present to demonstrate this case?

    Darwin does not provide a formal, quantifiable, long-term experiment to demonstrate the superiority of mild, long-term selection. Instead, he relies on descriptive, historical examples from breeders’ practices and then uses a logical argument based on the nature of variation. Thus, Darwin’s appeal demonstrates plausibility, not proof. This is an important distinction if one is to treat natural selection as a mechanism of universal transformation rather than limited adaptation.

    Even still, the extrapolation of differential selection, and of the environment’s role in it, is not egregiously contentious or strange. Perhaps surprisingly, then, the assumption of analogism turns out to be the most defensible of the three. The assumptions that stand in more doubt are uniformitarianism and nominalism (which will occupy the rest of this essay). These two undergird Darwin’s broader inference. When formalized, they resemble the following abductive arguments:

    Argument from Persistent Variation and Selection:

    Premise 1: If the mechanisms of variation and natural selection are persistent through time, then we can infer universal common descent.

    Premise 2: The mechanisms of variation and natural selection are persistent through time.

    Conclusion: Therefore, we can infer universal common descent.

    Argument from Difference in Degree:

    Premise 1: If all life differs only by degree and not kind, then we can infer that variation is a sufficient process to create all modern forms of life.

    Premise 2: All life differs only by degree and not kind.

    Conclusion: Therefore, we infer that variation is a sufficient process to create all modern forms of life.

    From these inferential conclusions, we see the importance of the two final assumptions as a fountainhead of the stream of Darwinian theory. 

    Before moving on, a few disclaimers are in order. It is worth noting that both arguments are contingent on the assumption that biology has existed throughout long geological time scales, but that is to be put aside for now. Notice we are now implicitly granting the assumption of analogism, and this imported doctrine is, likewise, essential to any common descent arguments. Finally, it is also worth clarifying that Darwin’s repeated insistence that ‘no line of demarcation can be drawn’ between varieties and species exemplifies the nominalist premise on which this argument from degree depends.

    To test these assumptions and determine whether they are as plausible as Darwin takes them to be, we first need to examine their constituent evidence and whether they provide empirical or logical support for Darwin’s thesis.

    The uniformitarian view can be presented in several ways. For Darwin, the view was the lens through which he saw biology, based on the Principles of Geology as articulated by Charles Lyell. Overall, it is not a poor inferential standard by any means. There are, however, certain caveats that limit its relevance in any science. Essentially, the mechanism in question must be precisely known, in that what X can do is never extrapolated into what X cannot do as part of its explanatory power. 

    How Darwin frames the matter is to say, “I observe X happening at small scales; therefore X can accumulate indefinitely.” This is not inherently incorrect or poor science in and of itself. However, one might ask: if one does not know the specific mechanisms involved in this variation process, is it really plausible to extrapolate these unknown variables far into the past or the future? Without knowing how variation actually works (no Mendelian genetics, no understanding of heredity’s material basis), Darwin is in a conundrum. He cannot justify the assumption that variation is unlimited if he cannot explain what it would even mean for that proposition to be true across deep time. It is like measuring the Mississippi’s sediment deposition rate, as was done for over 170 years, and extrapolating it back in time until the river’s delta would have spanned the Gulf of Mexico. Alternatively, it is like measuring the processes of water erosion along the White Cliffs of Dover and extrapolating back in time until England reaches the European continent. In the first case, there is an apparent flaw in assuming constant deposition rates. In the second case, it is evident that water erosion alone could not have caused the original break between England and France.

    It is the latter issue that is of deep concern here. There are too many unknowns in this equation to make it remotely scientific. It is not true that observing a phenomenon consistently requires understanding its mechanisms to extrapolate. However, Darwin’s theory is historical in a way that gravity, disease, or early mechanistic explanations were not. It cannot be immediately tested. Darwin, at best, leaves us to do the bulk of the grunt work after indulging in what can only be called guesswork.

    Darwin’s second line of reasoning to reach the universal common ancestry thesis relies heavily on a philosophical view of reality: nominalism. For nominalism to be correct, all traits and features would need to be quantitatively different (longer/shorter, harder/softer, heavier/lighter, rougher/smoother) without any that are qualitatively different (light/dark, solid/liquid/gas, color/sound, circle/square). In order to determine whether biology contains quality distinctions, we must understand how and in what way kinds become differentiable.

    The best polemical examples of discrete things, which differ by more than degree, are colors. Colors can be hard to pin down on occasion. Darwin would have an easy time, as he did in the taxonomical discourse on species and variation, pointing out the divided schools of thought in the classification of colors. Intuitively, there is a continuous flow from some red to some blue. Even if the two are mostly distinguishable, is not that cloud or wash of in-betweens enough to question the whole enterprise of genuine or authentic categories?

    However, moving from blue to yellow is not just an increase or decrease in something; it is a change to an entirely new color identity. It is a new form. The perceptual experience of blue is qualitatively different from the perceptual experience of yellow, meaning the two affect the viewer in particular and different ways. Hues, specifically, are indeed highly differentiated and are clear species within the genus of color. An artist mixing blue and yellow to create green does not thereby prove that blue and yellow are not real, distinct colors—only that intermediates are possible. Likewise, it is no business of the taxonomer, who calls some groups species and others variations, to negate the realness of any of these separate groups and count them as arbitrary and nominal. If colors—which exist on a continuous spectrum of wavelengths—still exhibit qualitative differences, then Darwin’s assumption that ALL biological features exist only on quantitative gradients becomes questionable.

    However, Darwin has done this very thing, representing different kinds of structures, with different developmental origins and functional architectures, as a mere spectrum with no distinct affections or purposes. Darwin needs variation to be infinitely plastic, but what does he say to real biological constraints? Is it ever hard to tell the difference between a plant and an animal? A beak from fangs? A feather from fur? A nail from a claw? A leaf from a pine needle? What if body plans have an inherent organizational logic that resists certain transformations? He is treating organisms like clay that can be molded into any form, but what if they are more like architectural structures with load-bearing walls? Darwin is missing good answers to these concerns, all of which need answers before the Argument from Difference in Degree can be called sound or convincing.

    This critique does not diminish Darwin’s achievement in proposing a naturalistic mechanism for adaptation. Instead, it highlights the philosophical assumptions embedded in his leap from observable variation to universal common descent: assumptions that, in 1859, lacked the mechanistic grounding that would make such extrapolation scientifically secure.

  • The Five Major Challenges To Hume’s Skepticism

    The Five Major Challenges To Hume’s Skepticism

    In David Hume’s book A Treatise of Human Nature, he constructs what he calls the science of man. One cannot rightly understand any other species of science before this foundational science. The most radical and paradigm-shifting realization, for Hume, is that if all that exists are impressions and ideas, there are no grounds to truly justify putting any two impressions together causally, no matter how we might be inclined or disposed to do so, either by vulgar habit or through any rational means. This profound insight — that impressions are singular moments of a particular feeling with no relation except that of imagination — forced philosophers (including critics such as Reid) to deeply re-evaluate theories of knowledge acquisition and general epistemic concerns.

    Reid says this in his dedication for An Inquiry into the Human Mind, “His reasoning appeared to me to be just: there was therefore a necessity to call in question the principles upon which it was founded, or to admit the conclusions.” However, there are more reasons than the mere founding principles to reject Hume’s rationale. Drawing on a recent and rigorous debate, here are the five major critiques that make me skeptical of Hume’s skeptical conclusions.

    1. Circular Reasoning (The Problem of Induction)

    Hume uses causal reasoning (observing past regularities and inferring principles about human nature) to undermine the rational basis of causal reasoning. If Hume derives the distinction between cause and mere correlation from experience, and then uses that distinction to argue that cause-and-effect does not exist outside the mind (outside a relation of ideas), he is making a circular argument. The implications are profound, because the argument turns on itself: if belief in necessary connection is formed apart from reason, then there is equally no rational ground for undermining causal reasoning. Either the basis for a necessary connection is reason and logical deduction, in which case we can infer it from particular impressions, or it is not, in which case no skeptical argument built on those impressions can touch it. Either way, nothing falls to his skeptical rebuttal. You cannot easily conceive of a cause without an effect, any more than a premise without a conclusion.

    2. The Self-Refutation of Assertion and Communication

    The fact that Hume is making an argument refutes his point entirely. On what grounds can Hume either 1. make a distinction between kinds of necessity or 2. place either relations or matters of fact squarely into one category? Unthinkable things are equivalent to non-existent things, according to Hume. Therefore, you cannot make claims about external reality with reference to non-existent concepts. Even concepts of the imagination must exist by virtue of real impressions that have newly associated connections. Where are the impressions for a law such as non-contradiction?

    Hume believes we cannot know a table exists, so this is not simply descriptive. His outward attempts to convince others, and the fact that he has followers who support his theory, testify against him. Psychological interpretations of reality are false simply because meaning exists apart from the mechanical goings-on of the mind, and that meaning is communicable. The very fact that Hume is articulating his theory indicates such. Even a phenomenological view is better than psychologism.

    3. The Ad Hoc Assumption of External Existence

    Hume asks for the impression that gives rise to the idea of continued and external existence separate from our perception, but where does he get the idea of continued and external existence in the first place? If everything is sense impressions, how is he arguing against anything contrary to sense impressions? This is all very ad hoc, calling concepts fabrications of the imagination and the like. Does he not realize that by doing so he condemns the very principles which allowed him to condemn continuation and external existence?

    4. The Active Nature of Impressions, Not Raw Data

    There is also another popular critique of Hume: the tree falling in the woods. The tree falls without making a sound, for a sound is something that can only be heard. The point being, Hume’s impressions already imply cause-and-effect before they are even interpreted or registered. Here is another example. If two people hear a recording of an orchestra, but one of them has finely tuned ears for orchestration while the other does not, then the one with finely tuned ears will pick out the counter-melody played on the violin, while the other, unsurprisingly, will not. However, Hume would have to acknowledge this as an impression reflected upon and interpreted by relation (all in a near-instant), which implies that a higher acuity has been granted to one of them in the realm of a particular sense. If sense is raw data, and therefore something that you receive and do not create, it stands to reason that you should not be able to improve in the tacit reception of raw data. This analogy highlights the tension in Hume’s account, suggesting that our senses are not passive receptors of information but active interpreters that can improve over time.

    5. The Flawed Equivalence of Conceivability and Possibility

    A rigorous philosophical objection to Hume’s conclusion on necessity centers on his premise that what is conceivable is logically possible. Hume argues that because we can conceive of a cause without its usual effect (e.g., imagining the sun not rising) without contradiction, the necessary connection is not a truth of reason, but of habit. However, this conflates a psychological possibility (what we can imagine) with a metaphysical possibility (what could actually happen in reality). Contemporary critics argue that our inability to conceive of a contradiction in a causal break may reflect our epistemic limitations—our ignorance of deep, non-obvious natural laws—rather than a statement about the world itself. Therefore, the supposed “freedom” of the imagination that underpins his skepticism is merely a function of our ignorance of actual natural necessity, and his argument fails to prove that the necessity is truly absent from the objects themselves.

  • The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The Agnostic Nature of Phylogenetic Trees and Nested Hierarchies

    The evidence typically presented as definitive proof for the theory of common descent, the nested hierarchy of life and genetic/trait similarities, is fundamentally agnostic. This is because evolutionary theory, in its broad explanatory power, can be adapted to account for virtually any observed biological pattern post-hoc, thereby undermining the claim that these patterns represent unique or strong predictions of common descent over alternative models, such as common design.

    I. The Problematic Nature of “Prediction” in Evolutionary Biology

    1. Strict Definition of Scientific Prediction: A true scientific prediction involves foretelling a specific, unobserved phenomenon before its discovery. It is not merely explaining an existing observation or broadly expecting a general outcome.
    2. Absence of Specific Molecular Predictions:
      • Prior to the molecular biology revolution (pre-1950s/1960s), no scientist explicitly predicted the specific molecular similarity of DNA sequences across diverse organisms, the precise double-helix structure, or the near-universal genetic code. These were empirical discoveries, not pre-existing predictions.
      • Evolutionary explanations for these molecular phenomena (e.g., the “frozen accident” hypothesis for the universal genetic code) were formulated after the observations were made, rendering them post-hoc explanations rather than predictive triumphs.
      • Interpreting broad conceptual statements from earlier evolutionary thinkers (like Darwin’s “one primordial form”) as specific molecular predictions is an act of “eisegesis”—reading meaning into the text—rather than drawing direct, testable predictions from it. A primordial form does not necessitate universal code, universal protein sequences, universal logic, or universal similarity.

    II. The Agnosticism of the Nested Hierarchy

    1. The Nested Hierarchy as an Abstract Pattern: The observation that life can be organized into a nested hierarchy (groups within groups, e.g., species within genera, genera within families) is an abstract pattern of classification. This pattern existed and was recognized (e.g., by Linnaeus) long before Darwin’s theory of common descent.
    2. Compatibility with Common Design: A designer could, for various good reasons (e.g., efficiency, aesthetic coherence, reusability of components, comprehensibility), choose to create life forms that naturally fall into a nested hierarchical arrangement. Therefore, the mere existence of this abstract pattern does not uniquely or preferentially support common descent over a common design model.
    3. Irrelevance of Molecular “Details” for this Specific Point: While specific molecular “details” (such as shared pseudogenes, endogenous retroviruses, or chromosomal fusions) are often cited as evidence for common descent, these are arguments about the mechanisms or specific content of the nested hierarchy. These are not agnostic and can be debated fruitfully. However, they do not negate the fundamental point that the abstract pattern of nestedness itself remains agnostic, as it could be produced by either common descent or common design.
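    The claim in point 2 can be made concrete with a toy simulation (my own illustration, not part of the original argument): two very different generative processes, branching descent and top-down component reuse, both yield trait distributions that sort organisms into a strictly nested hierarchy. The organism names, module names, and the “design plan” below are all hypothetical.

    ```python
    import itertools

    def is_nested(groups):
        # A family of sets forms a nested hierarchy (a "laminar family") when
        # any two groups are either disjoint or one contains the other.
        return all(a <= b or b <= a or a.isdisjoint(b)
                   for a in groups for b in groups)

    def trait_groups(organisms):
        # For each trait, collect the set of organisms that carry it.
        traits = set().union(*organisms.values())
        return [frozenset(o for o, ts in organisms.items() if t in ts)
                for t in traits]

    def by_descent(depth=3):
        # Branching descent: each lineage inherits its parent's traits and
        # gains one novelty; the leaves are the "observed" organisms.
        novel, name, organisms = itertools.count(), itertools.count(), {}
        def branch(traits, d):
            if d == 0:
                organisms[f"sp{next(name)}"] = traits
            else:
                for _ in range(2):
                    branch(traits | {next(novel)}, d - 1)
        branch(frozenset({next(novel)}), depth)
        return organisms

    def by_design(plan):
        # Top-down component reuse: every product in a product line receives
        # the line's shared module, then is further specialized.
        products = {}
        def build(node, kit):
            module, subplans = node
            kit = kit | {module}
            if not subplans:                  # a finished product
                products[module] = frozenset(kit)
            for sub in subplans:
                build(sub, kit)
        build(plan, frozenset())
        return products

    # A hypothetical design plan: a shared base module, then two product lines.
    plan = ("life-support",
            [("photosynthesis", [("flower", []), ("cone", [])]),
             ("nerves",         [("fur", []),    ("feather", [])])])

    print(is_nested(trait_groups(by_descent())))    # descent nests
    print(is_nested(trait_groups(by_design(plan)))) # so does design
    ```

    Both checks print True. The abstract nesting pattern alone cannot discriminate between the two processes; only the further molecular “details” mentioned in point 3 could.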

    III. Evolutionary Theory’s Excessive Explanatory Flexibility (Post-Hoc Rationalization)

    1. Fallacy of Affirming the Consequent: The logical structure “If evolutionary theory (Y) is true, then observation (X) is expected” does not logically imply “If observation (X) is true, then evolutionary theory (Y) must be true,” especially if the theory is so flexible that it can explain almost any X.
    2. Capacity to Account for Contradictory or Diverse Outcomes:
      • Genetic Similarity: Evolutionary theory could equally well account for a world with no significant genetic similarity between organisms (e.g., if different biochemical pathways or environmental solutions were arrived at independently, or if genetic signals blurred too quickly over time). A world with extreme proportions of horizontal gene transfer (as seen in prokaryotic and rare eukaryotic cells) would be one such scenario.
      • Phylogenetic Branching: The theory is flexible enough to account for virtually any observed phylogenetic branching pattern. If, for instance, humans were found to be more genetically aligned with pigs than with chimpanzees, evolutionary theory would simply construct a different tree and provide a new narrative of common ancestry. This flexibility puts a wedge in any measure of predictability claimed by the theory.
      • “Noise” in Data: If genetic data were truly “noise” (random and unpatterned), evolutionary theory could still rationalize this by asserting that “no creator would design that way, and randomness fully accounts for it,” thus always providing an explanation regardless of the pattern. In fact, a noise pattern is perhaps one of the few patterns better explained by random physical processes. Why would a designer, who has intentionality, create in such a slapdash way?
      • Convergence vs. Divergence: The theory’s ability to explain both convergent evolution (morphological similarity without close genetic relatedness) and divergent evolution (genetic differences leading to distinct forms) should immediately raise red flags, as this is a telltale sign of post-hoc fitting of observations rather than the result of specific prediction.
        • To illustrate this point, let’s imagine we have seven distinct traits (A, B, C, D, E, F, G) and five hypothetical populations of creatures (P1-P5), each possessing a unique combination of these traits. For example, P1 has {A, B, C}, P2 has {A, D, E}, P3 has {A, F, G}, P4 has {B, D, F}, and P5 has {E, G}. When examining this distribution, we can construct a plausible “evolutionary story.” Trait ‘A’, present in P1, P2, and P3, could be identified as a broadly ancestral trait. P1 might be an early branch retaining traits B and C, while P2 and P3 diversified by gaining D/E and F/G respectively.
        • However, the pattern becomes more complex with populations like P4 and P5. P4’s mix of traits {B, D, F} suggests it shares B with P1, D with P2, and F with P3. An evolutionary narrative would then employ concepts like trait loss (e.g., B being lost in P2/P3/P5’s lineage), convergent evolution (e.g., F evolving independently in P4 and P3), or complex branching patterns. Similarly, P5’s {E, G} would be explained by inheriting E from P2 and G from P3, while also undergoing significant trait loss (A, B, C, D, F).
        • And this is the crux of the argument: given any observed distribution of traits, evolutionary theory’s flexible set of explanatory mechanisms—including common ancestry, trait gain, trait loss, and convergence—can always construct a coherent historical narrative. This ability to fit diverse patterns post hoc renders the mere existence of a nested hierarchy, disconnected from specific underlying molecular details, agnostic between common descent and other models such as common design.
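    The always-available “loss narrative” in the toy example above can be made concrete with a short Python sketch. This is a hypothetical illustration using the P1–P5 trait sets from the text, not a model of real phylogenetic inference: by positing a common ancestor that carried every observed trait, any distribution whatsoever can be “explained” by lineage-specific trait losses.

    ```python
    # Toy illustration: the hypothetical populations and traits from the text.
    populations = {
        "P1": {"A", "B", "C"},
        "P2": {"A", "D", "E"},
        "P3": {"A", "F", "G"},
        "P4": {"B", "D", "F"},
        "P5": {"E", "G"},
    }

    # Posit an ancestor carrying every trait seen anywhere (the union of all sets).
    ancestor = set().union(*populations.values())

    # For each population, the "narrative" is simply the set of traits it lost.
    narratives = {name: ancestor - traits for name, traits in populations.items()}

    # The story fits every population by construction, whatever the input data.
    for name, lost in sorted(narratives.items()):
        assert ancestor - lost == populations[name]
        print(f"{name}: ancestor minus {sorted(lost)}")
    ```

    Because the ancestor is defined as the union of all observed traits, the check can never fail no matter which trait sets are supplied—which is precisely the unfalsifiability worry this bullet raises.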

    IV. Challenges to Specific Evolutionary Explanations and Assumptions

    1. Conservation of the Genetic Code:
      • The claim that the genetic code must remain highly conserved post-LUCA due to “catastrophic fitness consequences” of change is an unsubstantiated assumption. Granted, it could be true, but one can imagine plausible scenarios in which exceptions could arise.
      • Further, evolutionary theory already postulates radical changes, including the very emergence of complex systems “from scratch” during abiogenesis. If such fundamental transformations are possible, then the notion that a “new style of codon” is impossible over billions of years, even via incremental “patches and updates,” appears inconsistent.
      • Laboratory experiments that successfully engineer organisms to incorporate unnatural amino acids demonstrate the inherent malleability of the genetic code. Yet no experiment has demonstrated abiogenesis, a far more implausible event with less evolutionary time to play with. Why arbitrarily limit which improbable things are permissible?
      • There is no inherent evolutionary reason to expect a single, highly conserved “language” for the genetic code; if information can be created through evolutionary processes, then multiple distinct solutions should be the rule.
    2. Functionality of “Junk” DNA and Shared Imperfections:
      • The assertion that elements like pseudogenes and endogenous retroviruses (ERVs) are “non-functional” or “mistakes” is often an “argument from ignorance” or an “anti-God/atheism-of-the-gaps” fallacy. Much of the genome’s function is still unknown, and many supposedly “non-functional” elements are increasingly found to have regulatory or other biological roles. For instance, see my last article on the DDX11L2 “pseudo” gene, which operates as a regulatory element, including as a secondary promoter.
      • If these elements are functional, their homologous locations are easily explained by a common design model, where a designer reuses functional components across different creations.
      • The “functionality” of ERVs, for instance, is often downplayed in arguments for common descent, despite their known roles in embryonic development, antiviral defense, and regulation, thereby subtly shifting the goalposts of the argument.
    3. Probabilities of Gene Duplication and Fusion:
      • The probability assigned to beneficial gene duplications and fusions (which are crucial for creating new genetic information and structures) seems inconsistently high when compared to the low probability assigned to the evolution of new codon styles. If random copying errors can create functional whole genes or fusions, then the “impossibility” of a new codon style seems a little arbitrary.

    Conclusion:

    The overarching argument is that while common descent can certainly explain the observed patterns in biology, its explanatory power often relies on post-hoc rationalization and a flexibility that allows it to account for almost any outcome. This diminishes the distinctiveness and predictive strength of the evidence, leaving it ultimately agnostic when compared to alternative models that can also account for the same observations through different underlying mechanisms.

  • The Pagan Can Be Saved?

    The Pagan Can Be Saved?

    Wesley Coleman

    In Søren Kierkegaard’s Concluding Unscientific Postscript to Philosophical Fragments, Johannes Climacus breaks down objective and speculative interpretations of Christianity, arguing instead that authentic religious truth is fundamentally subjective. As exemplified in his assertion on page 201 regarding truth in prayer, Climacus posits that the manner of an individual’s infinite, passionate relation to the eternal—even in the face of objective uncertainty or perceived untruth—is paramount, superseding intellectual assent to dogma or historical fact and revealing the inherent limitations of any detached, disinterested approach to faith. This stance foregrounds the lived reality of faith as a personal, strenuous endeavor, fundamentally separate from and perhaps at odds with objective inquiry.

    Kierkegaard, through Climacus, opens the Postscript by challenging what he identifies as problematic approaches to understanding Christianity: the historical, the speculative, and the superficial religiousness prevalent in his time. From the very start, Kierkegaard has separated the objective issue of the truth of Christianity from the subjective issue of the subjective individual’s relation to the truth of Christianity (Kierkegaard 22). Climacus contends that the objective point of view, whether focusing on historical or philosophical truth, is inherently flawed when applied to Christianity. An objective inquiry is characterized as “disinterested,” seeking to establish truth through critical consideration of reports or the relation of doctrine to eternal truth. However, for an individual concerned with their eternal happiness, historical certainty, being merely an “approximation,” is profoundly insufficient. This is because “an approximation is too little to build his happiness on and is so unlike an eternal happiness that no result can ensue” (Kierkegaard 22). The scholarly pursuit, while commendable in its erudition, ultimately “distracts” from the issue of an individual’s faith (Kierkegaard 14) and “suppresses” the vital dialectical clarity required for true understanding (Kierkegaard 11).

    The fundamental problem with objectivity, as Climacus elaborates, is its inherent detachment from the individual’s existence. The “objective subject” is too “modest” and “immodest” to include himself in the inquiry; he is interested but “not infinitely, personally, impassionedly interested in his relation to this truth concerning his own eternal happiness” (Kierkegaard 22). This detachment leads to a comical self-deception: “Precisely this is the basis of the scholar’s elevated calm and the parroter’s comical thoughtlessness” (Kierkegaard 22). Christianity, Climacus asserts, is spirit; spirit is inwardness; inwardness is subjectivity; subjectivity is essentially passion, and at its maximum an infinite, personally interested passion for one’s eternal happiness. Therefore, as soon as subjectivity is taken away, and passion from subjectivity, and infinite interest from passion, there is no decision whatsoever. The objective approach, by sacrificing this infinite, personal, impassioned interestedness, paradoxically makes one too objective to have eternal happiness. The speculative point of view fares no better, attempting to permeate Christianity with thought and make it eternal thought. Yet, if Christianity is truly subjectivity, a matter of inward deepening, then objective indifference cannot come to know anything whatsoever. Like is understood only by like; thus, the knower must be in the requisite state of infinite, passionate interest. Speculative thought, in its objectivity, is “totally indifferent to his and my and your eternal happiness” (Kierkegaard 55), making its “happiness” an illusion as it attempts to be “exclusively eternal within time” (Kierkegaard 56).

    This critique of objective and speculative approaches, which Climacus gradually unfolds, finally builds to a climax on page 201 with the passage at hand. The chapter titled “Subjective Truth, Inwardness; Truth Is Subjectivity” in Part Two directly introduces the core concept that “truth becomes appropriation, inwardness, subjectivity, and the point is to immerse oneself, existing, in subjectivity” (Kierkegaard 192). Climacus establishes that for an existing person, “the question about truth persists” not as an abstract definition, but as something to “exist in” (Kierkegaard 191). He dismisses mediation and the abstract “subject-object” as reverting to abstraction (Kierkegaard 192), emphasizing that “an existing person cannot be in two places at the same time, cannot be subject-object” (Kierkegaard 199). The “I-I” is explicitly called a “mathematical point that does not exist at all” (Kierkegaard 197), making it clear, for Climacus, that it is an impossibility for an existing human being to transcend their individual, passionate existence and achieve this abstract oneness. For Climacus, “only ethical and ethical-religious knowing is essential knowing” (Kierkegaard 198), and such knowing is always essentially related to the knower’s own existence.

    The critical distinction, immediately preceding the paragraph in question, is articulated as: “When the question about truth is asked objectively, truth is reflected upon objectively as an object to which the knower relates himself…When the question about truth is asked subjectively, the individual’s relation is reflected upon subjectively. If only the how of this relation is in truth, the individual is in truth, even if he in this way were to relate himself to untruth” (Kierkegaard 199). This prioritizes the mode of relation over the object of relation in its abstracted form separate from engagement.

    Then, the force of Climacus’s argument is finally catalyzed. He starts with an aggressive remark, “now, if the problem is to calculate where there is more truth…then there can be no doubt about the answer for anyone who is not totally botched by scholarship and science” (Kierkegaard 201). The harsh remark is true: it is intuitive for all those not steeped in abstraction. Those who are incapable of grasping the truth are those who have been immersed in a harmful way of thinking, and Climacus’s words are meant to provoke that truth. The phrase “botched by scholarship and science” in particular recalls the “infinite, personal, impassioned interestedness” that is absent in the person pursuing the objective issue (Kierkegaard 27).

    Climacus then explicitly rules out any notion of a neutral, balanced approach: “(and, as stated, simultaneously to be on both sides equally is not granted to an existing person but is only a beatifying delusion for a deluded I-I)” (Kierkegaard 201). This re-emphasizes that an existing human being cannot inhabit the abstract “subject-object” or “I-I,” which is a phantom of pure thought (Kierkegaard 192). To attempt such a mediation between objective and subjective approaches is a “delusion,” a fantastical escape from the concrete reality of existing. An existing person is always in a process of becoming (Kierkegaard 192), and this inherent motion precludes the static, all-encompassing view of the “I-I” (Kierkegaard 199).

    The core of the paragraph is the deep dichotomy presented: “whether on the side of the person who only objectively seeks the true God and the approximating truth of the God-idea or on the side of the person who is infinitely concerned that he in truth relate himself to God with the infinite passion of need” (Kierkegaard 201). The dichotomy is on one hand, “the true God” and “approximating truth of the God-idea” and on the other, “infinite passion of need.” The objective seeker remains stuck in approximate knowledge, which, as established earlier, is insufficient for eternal happiness. In contrast, the “infinite passion of need” signifies the highest subjectivity, where the individual’s “eternal happiness” is at stake. This passion brings to the individual a true existential importance that is impossible to attain through speculation.

    The paragraph then presents a provocative thought experiment: “If someone who lives in the midst of Christianity enters, with knowledge of the true idea of God, the house of God, the house of the true God, and prays, but prays in untruth, and if someone lives in an idolatrous land but prays with all the passion of infinity, although his eyes are resting upon the image of an idol—where, then, is there more truth?” (Kierkegaard 201). This scenario is incredibly hard for many who view Christianity as something true that one believes about God. The analogy turns that presumption on its head, drawing a distinction between the “what” and the “how” of faith (Kierkegaard 199). The person who is a Christian by birth or culture or even intellectually “knows the true idea of God” and prays in the “house of the true God” (Kierkegaard 201) represents the objective approach that assumes faith is an afterthought, something that can be taken for granted. Such an individual may possess all the outward forms and correct doctrines, but their prayer is “in untruth” if it lacks the “infinite passion of inwardness” (Kierkegaard 201). This coincides with Climacus’s earlier assertion that objective Christianity is pagan (Kierkegaard 43), and to know a creed by rote is paganism, because Christianity is inwardness. Their knowledge, being disinterested, is merely a vanishing, unrecognizable atom of objective understanding, not transformative truth.

    Conversely, the individual in an “idolatrous land” who prays “with all the passion of infinity” to an idol, despite the objective untruth of the object, possesses “more truth” (Kierkegaard 201). The passion itself, the subjective “how” of their relation, is the determining factor. This is because the passion of the infinite is the very truth. Their worship, even of an objectively false god, carries the weight of authentic, boundless engagement.

    The conclusion of the paragraph drives the point home: “The one prays in truth to God although he is worshiping an idol; the other prays in untruth to the true God and is therefore in truth worshiping an idol” (Kierkegaard 201). This is not a relativistic dismissal of God’s objective existence, but a radical redefinition of what constitutes truth in the context of an individual’s religious life. The person who prays passionately to an idol is, in their inwardness, genuinely seeking the divine, and this “infinite passion of need” (Kierkegaard 201) creates a true “God-relation” (Kierkegaard 199). Their relation, despite the objective error, is in truth. This is, perhaps, a shocking revelation to the one who calls the heretic ‘unsaved’. Conversely, the person who prays to the true God without this infinite passion effectively turns the true God into an “idol”—an object of detached, intellectual assent rather than a living, transforming presence. This intellectual understanding without passionate inwardness is merely an illusion. It reduces the divine to an object for intellectual scrutiny, precisely what objective thought does to Christianity (Kierkegaard 52).

    Other possible interpretations of this passage, primarily objective or speculative, fail to grasp its radical thrust. An objective interpretation would likely focus on the factual untruth of idol worship, concluding that the idolater is in untruth regardless of their passion. This perspective, however, completely misses Climacus’s central argument that objective knowledge is “indifferent” to the knower’s existence and thus cannot engage with the truth of the infinite (Kierkegaard 193). For an objective approach, the truth is merely “an object to which the knower relates himself” (Kierkegaard 199), failing to recognize that “the individual’s relation is reflected upon subjectively” and the “how” is truth (Kierkegaard 199). This kind of detached, “disinterested” knowledge simply “distracts” from the issue of faith (Kierkegaard 28).

    A speculative interpretation might attempt to mediate between the two positions, arguing that the true understanding lies in a higher synthesis where both the object and the subjective relation are reconciled. However, Climacus explicitly rejects such mediation for an existing person, stating that to be in mediation is to be finished; to exist is to become. Speculative thought, in its quest for a “system” (Kierkegaard 14), “promises everything and keeps nothing at all” for the existing individual. It assumes a “presuppositionless” beginning and ultimately “dissolves into a make-believe” of understanding faith (Kierkegaard 14). By attempting to “explain and annul” the paradox, speculative thought implicitly “corrects” Christianity instead of explaining it. The absolute paradox, which is the eternal truth coming into existence in time, cannot be understood but only believed “against the understanding” (Kierkegaard 217). Any attempt to rationally encompass or explain it is “volatilization” and a return to paganism (Kierkegaard 217). The speculative thinker, in trying to become “objective” and “disappear from himself” (Kierkegaard 56), cannot grasp the existential truth of faith, which is grounded in passion and the “utmost exertion” of the existing self (Kierkegaard 55).

    Furthermore, the interpretation that reduces Christianity to a set of doctrines or a historical phenomenon, implicitly adopted by the “Christian in the midst of Christianity” who prays “in untruth” (Kierkegaard 201), is also rejected. Christianity is not a doctrine but a relational act. The relation to a doctrine is merely intellectual, whereas the relation to Christianity is one of faith, an infinite interestedness. To be a Christian by name only is a serious danger due to the fact that it removes the necessary “infinite passion” (Kierkegaard 16). Such individuals, by “praying in untruth” (Kierkegaard 201), effectively transform the true God into an “idol” (Kierkegaard 201), stripped of the demanding, transformative power that calls for infinite inwardness.

    In conclusion, the paragraph on page 201 profoundly encapsulates Climacus’s core thesis: Christianity’s truth is existentially actualized not through objective knowledge or speculative comprehension, but through the subjective individual’s absolute, infinite passion. This passion, born of an “infinite need” and held fast against “objective uncertainty” (Kierkegaard 203), is the very essence of faith, a “contradiction between the infinite passion of inwardness and the objective uncertainty” (Kierkegaard 204). The example of the passionate idolater versus the dispassionate Christian reveals that the intensity and truthfulness of the subjective relation far outweigh the objective accuracy of the object of worship when it comes to genuine religiousness. This radical emphasis on the “how” of faith over the “what” forces the reader to confront the demanding, terrifying, and deeply personal nature of becoming and being a Christian, a path that rejects the easy and fragmentary reassurances of objective certainty and speculative systems in favor of a lived, passionate existence with a holistic commitment. The radical conclusion that one can have objective error and be in real relationship with God. The radical conclusion that the pagan can be saved. Not because their idol is the true God, but because they have true faith.

    Climacus, Johannes. Concluding Unscientific Postscript to Philosophical Fragments. Edited and translated by Howard V. Hong and Edna H. Hong, Princeton UP, 1992.

  • The Nature of Society: Where We Stand as Individuals

    The Nature of Society: Where We Stand as Individuals

    From my perspective, society isn’t some grand, top-down invention or a purely artificial construct. Instead, it’s a natural outgrowth of human interaction, an organic creation. This organic origin gives society a fascinating, dualistic nature: it’s both a source of conflict and a fertile ground for cooperation, a necessary evil, and a crucial tool for individual flourishing. I see these seemingly opposing ideas not as separate or contradictory, but as deeply intertwined.

    The inherent conflict within society comes from the undeniable reality of human imperfection. As fallen creatures, individuals will always have competing interests, differing desires, and a natural lean toward self-interest and corruption. This doesn’t mean we’re in a constant state of overt warfare, but rather a perpetual tension over resources, values, and the direction we take as a collective. Yet, our natural inclination to interact also fosters cooperation. Things like specialization, security, the pursuit of knowledge, and companionship make a collective invaluable. Society, then, emerges from this very tension—the delicate balance between individual will and collective order.


    Our Place as Individuals in the Social Fabric

    An individual’s relationship to society is equally nuanced. In my view, the paramount command for each of us is to love our neighbor and orient our lives toward God. This core Christian ethical responsibility dictates an outward-looking concern for others, yet it critically anchors responsibility within our own sphere of influence. While the collective good is undeniably important and should be prioritized when we can genuinely effect change, our ultimate responsibility isn’t to the totality of society—what Dostoevsky called ‘general love of humanity’—but for what we can directly control: the self.

    This means cultivating personal virtue, making ethical choices in daily interactions, and contributing positively within our own communities. Society, in turn, has a duty to its members, but this duty is reciprocal. It flows from the recognition that individuals have responsibilities toward each other. It’s not a top-down benevolence, but a framework of mutual obligation.


    Understanding Freedom and Authority

    Freedom, in this context, isn’t absolute license. All freedom is either freedom from or freedom to. We should possess freedom from things that cause harm—whether it’s physical violence, coercive manipulation, or the unjust suppression of conscience. Equally, we should have the freedom to choose things that benefit us, to pursue our vocations, and to act on good impulses. Crucially, to exercise these freedoms, we must also be free to express our perceptions about what’s beneficial and harmful, and to act on the former while restricting the latter.

    Broader, or higher, societal authority should be clearly codified into law, discriminating against no one group. These laws should ideally be general rules of conduct, equally applicable to all, providing a predictable framework for individual action rather than dictating specific outcomes.

    This idea comes from a fundamental principle of governance, which I derive from thinkers like Hayek and Mill: broader authority—the state or collective institutions—should err on the side of fewer restrictions and regulations. Its role is to establish and enforce the rules of the game, not to direct the play itself. Conversely, narrower authority, extending to its most narrow point in the self, should err on the side of being too restricted. This means exercising personal moral discipline and self-governance.

    This plays out with a clear distinction: the king declares that murder is forbidden, establishing a universal legal boundary, while the individual forbids hate in his own heart, engaging in the continuous, internal struggle for virtue. The former creates external order; the latter cultivates internal righteousness. The moment this moral hierarchy is dismembered is likely the same moment society begins to decline.


    The Unending Struggle

    Human beings are fallen creatures, and none of this will ever play out as a utopian vision. We’re not so malleable, in a Marxist sense, that our nature can be entirely shaped by policy or environmental conditions; there are inherent tendencies and proclivities that resist perfect social engineering. Nor are humans so inherently good that they don’t tend toward corruption when power is consolidated or accountability is removed. While humans are capable of immense wonders, they are equally capable of great atrocities. It’s not wrong to call humanity bad in its fallen state, but to call us irredeemable would be antithetical to the Christian ethos that informs my worldview.

    The telos of man, our ultimate purpose, is to obey God’s commands. Ideally, institutions should facilitate that process, creating an environment conducive to moral flourishing. However, due to human imperfection and the inherent limitations of collective structures, institutions are, perhaps, not capable of reaching that ideal state in their earthly manifestation.

    In many ways, I identify strongly with Friedrich Hayek’s arguments in The Road to Serfdom. His critique of collectivist policies and central planning resonates with my understanding of human nature and the necessary boundaries of societal authority. Hayek meticulously demonstrates how attempts to centrally plan society toward specific, desirable ends, even with the best intentions, inevitably lead to a loss of individual liberty and an escalation of coercive power and totalitarianism. I maintain a tentative rule-of-law position while I wait for the Lawmaker.

    Further Reading:

    • Dostoevsky, Fyodor. The Brothers Karamazov
    • Hayek, F. A. The Road to Serfdom
    • Marx, Karl, and Engels, Friedrich. The Communist Manifesto
    • Mill, John Stuart. On Liberty
  • The Incoherence of Naturalism

    The Incoherence of Naturalism

    Introduction

    Naturalism—the philosophical position that reality consists entirely of natural entities governed by natural laws—presents itself as the most rational and empirically grounded worldview. Yet despite its scientific veneer, naturalism suffers from foundational incoherence that undermines its viability as a comprehensive philosophy.

    This critique demonstrates that naturalism is self-defeating, arbitrarily restrictive, explanatorily inadequate, and internally inconsistent. Each of these failings stems not from temporary limitations in scientific knowledge but from structural contradictions within naturalism itself. Together, they render naturalism philosophically untenable and point toward the necessity of a more pluralistic metaphysical framework.

    Premise 1: Self-Defeating Foundations

    Naturalism’s first fatal flaw lies in its inability to justify its own foundations without circularity or special pleading.

    Scientific inquiry rests on several non-empirical assumptions that cannot be empirically verified: the reliability of human reason, the uniformity of nature, the correspondence between our perceptions and external reality, and principles like logical consistency and parsimony. These assumptions cannot be proven through scientific methods—they are preconditions for scientific inquiry itself.

    This creates an insurmountable problem for naturalism. If reality consists entirely of natural entities governed by natural laws, then human cognition is merely the product of evolutionary processes that selected for survival value, not truth-tracking capacity. As philosopher Alvin Plantinga argues, if our cognitive faculties evolved primarily for reproductive fitness rather than truth detection, we have no reason to trust them for accurately grasping metaphysical truths like naturalism itself.

    The naturalist might counter that evolutionary adaptiveness correlates with truth-tracking, particularly regarding immediate environmental threats. But this defense fails to bridge the gap between adaptive perceptual reliability and justified abstract metaphysical beliefs. There is no evolutionary advantage to having accurate beliefs about quantum mechanics, consciousness, or cosmic origins. Natural selection has no mechanism to select for metaphysical accuracy.

    This creates what philosopher Thomas Nagel calls an “intolerable conflict” in naturalism—it relies on rational faculties that, by its own account, evolved for survival rather than metaphysical accuracy. The naturalist faces what Barry Stroud terms “irrecoverable circularity”: they must presuppose the reliability of faculties whose reliability they then try to explain through evolutionary processes.

    Even sophisticated attempts to escape this circularity through epistemic externalism merely shift the problem. Reliabilism claims beliefs formed through reliable processes are justified regardless of whether we can prove their reliability. But this begs the question: how do we establish which processes are reliable without presupposing their reliability? At some point, non-empirical axioms must be accepted on non-natural grounds.

    If naturalists retreat to pragmatism, accepting these axioms as “useful fictions” rather than truths, they have conceded that naturalism cannot justify its foundations within its own framework. This pragmatism is itself a non-empirical philosophical commitment that naturalism can neither justify nor dispense with.

    Premise 2: Arbitrary Restriction of Inquiry

    Naturalism’s second critical flaw lies in its arbitrary restriction of legitimate inquiry to natural causes alone.

    Philosophical naturalism makes an unwarranted leap from methodological naturalism (the practical scientific approach of seeking natural causes) to a metaphysical claim that only natural causes exist. This represents a category error—moving from a useful methodological heuristic to an ontological assertion without sufficient justification.

    By defining reality exclusively in terms of what natural science can study, naturalism creates a self-fulfilling prophecy: it finds only natural causes because it defines all discoverable causes as natural by definition. This circular approach prejudices investigation rather than allowing evidence to determine the boundaries of reality.

    The most powerful demonstration of this limitation is consciousness. Despite tremendous advances in neuroscience, the qualitative character of subjective experience—what philosopher Thomas Nagel calls the “what it is like” aspect of consciousness—resists reduction to physical processes. Neuroscience can correlate neural activity with reported experiences but cannot explain why these physical processes are accompanied by subjective experience at all.

    This limitation isn’t temporary but structural—scientific methods are designed to study third-person observable phenomena, not first-person subjectivity. The scientific method, by its very nature, abstracts away subjective qualities to focus on quantifiable properties. This creates what philosopher David Chalmers calls the “hard problem” of consciousness—explaining how and why physical processes give rise to subjective experience.

    Naturalists often respond by incorporating consciousness as a “fundamental” feature of an expanded natural framework—what Chalmers calls “naturalistic dualism.” But this semantic maneuver doesn’t resolve the ontological problem. If consciousness is fundamental and irreducible to physical processes, then reality includes non-physical properties—precisely what traditional naturalism denies. This exhibits what philosopher William Hasker calls “naturalism of the gaps”—expanding the definition of “natural” to encompass whatever resists reduction.

    Unlike historical examples like electromagnetism or vitalism, which were unexplained physical phenomena eventually incorporated into expanded physical frameworks, consciousness presents a categorically different challenge—explaining how subjective experience arises from objective processes. This isn’t merely an unexplained mechanism but a conceptual chasm between fundamentally different categories of reality.

    Premise 3: Explanatory Gaps

    Naturalism’s third major flaw lies in its persistent failure to explain fundamental aspects of human experience, despite centuries of scientific progress.

    Beyond consciousness, naturalism struggles to account for several phenomena central to human existence:

    Intentionality: The “aboutness” of mental states—the fact that thoughts, beliefs, and desires are about something beyond themselves—resists physical reduction. Physical states aren’t intrinsically “about” anything; they simply are. Yet our mental states exhibit this irreducible directedness toward objects, concepts, and possibilities. Philosopher Franz Brentano identified intentionality as the defining characteristic of mental phenomena, creating an explanatory gap that naturalism has failed to bridge.

    Rationality: Logical relationships between propositions aren’t physical connections but normative ones—they describe how we ought to reason, not merely how matter behaves. The laws of logic and mathematics exhibit a necessity that natural laws lack. Natural laws describe contingent regularities that could theoretically be otherwise; logical laws express necessary truths that couldn’t possibly be different. This modal difference creates another category distinction that naturalism struggles to accommodate.

    Morality: Moral imperatives involve inherent “ought” claims that cannot be derived from purely descriptive “is” statements. As philosopher G.E. Moore argued, any attempt to define moral properties in natural terms commits the “naturalistic fallacy.” Evolutionary accounts may explain the origins of moral psychology but cannot justify moral claims as true or authoritative. If moral judgments are merely evolutionary adaptations, their normative force is undermined, creating what philosopher Sharon Street calls the “Darwinian Dilemma.”

    Naturalists often respond to these gaps through eliminativism or emergentism. Eliminativism denies the reality of these phenomena, claiming they are illusions or folk-psychological confusions. But this approach is self-defeating—an illusion of consciousness must be experienced by someone, making consciousness inescapable. As philosopher John Searle notes, “Where consciousness is concerned, the appearance is the reality.”

    Emergentism fares no better. To claim consciousness “emerges” from physical processes without explaining the mechanism of emergence merely restates the mystery. Unlike other emergent properties (like liquidity emerging from H₂O molecules), consciousness involves a transition from objective processes to subjective experience—a categorical leap, not a continuous spectrum. The naturalist must explain how an arrangement of non-conscious particles yields consciousness, a problem so intractable that philosopher Colin McGinn argues we may be “cognitively closed” to its solution.

    These explanatory gaps aren’t temporary limitations in scientific knowledge but principled barriers arising from naturalism’s restricted ontology. After centuries of scientific progress, these gaps remain as profound as ever, suggesting a fundamental inadequacy in naturalism’s conceptual resources.

    Premise 4: Inconsistent Verification

    Naturalism’s fourth fatal flaw lies in its criterion for knowledge, which cannot justify itself without inconsistency.

    The naturalist privileges empirical verification—the idea that meaningful claims must be empirically testable. Yet this verification principle itself cannot be empirically verified. It is a philosophical position, not a scientific discovery. This creates an internal contradiction: if we accept only what can be demonstrated through scientific methods, we must reject the very principle that demands such verification.

    Even if naturalists reject strict verificationism, they still privilege empirical evidence above all else. Yet this privileging itself cannot be empirically justified. It’s a meta-empirical value judgment about what counts as legitimate evidence—precisely the kind of non-empirical philosophical commitment that naturalism struggles to account for.

    Attempts to resolve this inconsistency through naturalized epistemology (following Quine) don’t solve the problem—they institutionalize it. Treating epistemology as a branch of psychology assumes the reliability of the psychological methods used to study epistemology. This raises what philosopher Laurence BonJour calls the problem of “meta-justification”: how do we justify our justificatory framework? Naturalized epistemology ultimately relies on pragmatic success, but this pragmatism itself requires non-empirical criteria for what constitutes “success.”

    Even if we accept Quine’s web of belief, some strands in the web must be anchored independently of empirical verification. These include logical principles, mathematical truths, and the assumption that reality is comprehensible. These principles aren’t empirically derived but are preconditions for empirical inquiry. Their necessity reveals naturalism’s dependence on non-natural foundations.

    Naturalism thus faces an inescapable dilemma: either it consistently applies its verification standards and undermines its own foundations, or it makes special exceptions for its core principles and thereby acknowledges limits to its explanatory scope.

    The Inescapable Dilemma of Naturalism

    These four premises reveal that naturalism faces an inescapable dilemma:

    1. Strict naturalism maintains a coherent ontology (only physical entities exist) but fails to account for consciousness, intentionality, rationality, and its own foundations.
    2. Expanded naturalism accommodates these phenomena but sacrifices coherence by stretching “natural” to include fundamentally non-physical properties.

    This isn’t merely a limitation of current knowledge but a structural impossibility within naturalism’s framework. The problem isn’t that naturalism hasn’t yet explained consciousness; it’s that consciousness is categorically different from physical processes, requiring explanatory principles that transcend physical causation.

    A “richer naturalism” that embraces consciousness as fundamental, accepts non-empirical axioms pragmatically, and incorporates abstract objects has abandoned naturalism’s core thesis that reality consists entirely of natural entities subject to natural laws. This isn’t evolution of inquiry but conceptual surrender.

    Beyond Naturalism: The Case for Metaphysical Pluralism

    The most coherent alternative to naturalism is metaphysical pluralism—recognizing that reality includes physical processes, conscious experience, abstract entities, and normative truths, without reducing any to the others.

    This pluralistic approach acknowledges that different domains of reality require appropriate methods of investigation:

    1. Physical phenomena are best studied through empirical scientific methods
    2. Conscious experience requires phenomenological approaches that honor subjectivity
    3. Logical and mathematical truths demand rational analysis independent of empirical verification
    4. Normative questions involve philosophical reflection on values, not merely empirical facts

    Unlike naturalism, pluralism doesn’t face self-defeat (it can ground rational faculties non-circularly), doesn’t arbitrarily restrict inquiry (it allows appropriate methods for different domains), and doesn’t face explanatory gaps (it acknowledges irreducible categories without eliminating them).

    Naturalists often appeal to Ockham’s Razor (parsimony) and the practical success of science (pragmatism) as reasons to prefer naturalism over more metaphysically rich views like pluralism. However, as the preceding premises argue, these critiques are problematic when leveled by the naturalist, given the internal difficulties naturalism faces.

    1. Problems with the Parsimony Critique:

    • False Parsimony: Naturalism’s claim to parsimony often amounts to ontological stinginess achieved by explanatory inadequacy. It claims to be simpler by positing only one fundamental kind of “stuff” (natural/physical). However, as Premises 2 and 3 detail, this simplicity is bought at the cost of being unable to adequately account for or integrate crucial aspects of reality like consciousness, intentionality, rationality, and normativity. A theory that is simple but leaves vast swathes of reality unexplained is not genuinely more parsimonious than a theory that posits more fundamental categories but can actually explain or accommodate all the relevant phenomena. True parsimony should be measured not just by the number of types of entities posited, but by the overall complexity of the explanatory framework required to account for the data. Pluralism, by assigning different phenomena to different appropriate categories, might require a more diverse ontology but arguably a less strained and more comprehensive explanatory structure than naturalism, which must resort to eliminativism, mysterious emergence, or redefining terms to handle outliers.
    • Redefining “Natural” Undermines Parsimony: As noted above, naturalists trying to accommodate phenomena like consciousness might resort to calling it a “fundamental feature” within an “expanded naturalism” or “naturalistic dualism.” This is an attempt to absorb irreducible phenomena by broadening the definition of “natural.” But this move itself adds fundamental categories or properties to the naturalist ontology. If “natural” now includes irreducible subjective experience or fundamental abstract objects, the initial claim to radical simplicity (“only physical stuff”) is surrendered. This “naturalism of the gaps” demonstrates that naturalists, when pressed, do feel the need to add fundamental categories, thereby undermining their own parsimony argument against pluralism.
    • Parsimony Itself is a Non-Empirical Principle: Ockham’s Razor is a meta-scientific or philosophical principle guiding theory choice. It’s not something discovered through empirical science. As Premise 4 argues, naturalism struggles to justify such non-empirical principles within its own framework. If the naturalist insists that all legitimate knowledge must be empirically verifiable or grounded, they face a difficulty in appealing to a principle like parsimony, which is a criterion of theoretical virtue, not an empirical fact. Using parsimony to critique pluralism requires the naturalist to step outside their own purported empirical-only standard, or at least rely on a principle they cannot ground naturally.

    2. Problems with the Pragmatism Critique:

    • Conflation of Methodological and Metaphysical Pragmatism: Naturalists often point to the undeniable success of science (which operates using methodological naturalism – seeking natural explanations within its domain) as evidence for metaphysical naturalism (the philosophical claim that only natural things exist). As Premise 2 argues, this is a category error. Methodological naturalism is pragmatic for the specific goal of studying the physical world empirically. Metaphysical naturalism is a comprehensive worldview claim. The pragmatism of the former doesn’t automatically transfer to the latter. Pluralism fully embraces methodological naturalism for understanding the physical realm but recognizes that other realms (subjective experience, logic, morality) require different, though equally valid, approaches.
    • Pragmatism for What Purpose? If pragmatism means “what works as a comprehensive worldview,” then naturalism is arguably not pragmatic because it fails to provide a coherent or satisfactory account of fundamental aspects of human reality (consciousness, meaning, values, reason’s validity), as detailed in Premise 3. It might be pragmatic for building bridges or predicting planetary motion, but it’s arguably deeply unpragmatic for understanding what it means to be a conscious, rational, moral agent in a world with objective truths. Pluralism, by acknowledging different domains and methods, is arguably more pragmatically successful as a philosophical framework because it provides conceptual resources to engage meaningfully with the full spectrum of human experience and inquiry, not just the physically quantifiable parts.
    • Naturalism May Rely on Pragmatism for its Own Foundations: As Premises 1 and 4 suggest, naturalists, when pushed on how they justify the reliability of reason or the empirical method itself, might retreat to a pragmatic defense (“these methods just work”). If naturalism must ultimately appeal to pragmatism to ground its own core principles, it’s in a weak position to then turn around and critique pluralism solely on pragmatic grounds, especially when pluralism can argue it is more pragmatically successful in making sense of all of reality. This creates a kind of “pragmatism of the gaps” where pragmatism is invoked precisely where naturalism’s internal justification fails.

    In summary, the naturalist critiques of pluralism based on parsimony and pragmatism often miss the mark. Naturalism’s parsimony is frequently achieved by ignoring significant data or by subtly expanding its ontology, undermining the claim to unique simplicity. Its appeal to pragmatism often confuses the success of scientific method (which pluralism utilizes) with the philosophical adequacy of metaphysical naturalism as a total worldview, and ignores naturalism’s own potential reliance on pragmatic grounds it cannot fully justify. Pluralism, while positing a richer ontology, can argue it offers a more genuinely explanatory parsimony and a more comprehensive pragmatism by acknowledging the irreducible complexity of reality.

    Metaphysical pluralism doesn’t entail supernaturalism or theism by necessity. One can reject both naturalism and supernaturalism by acknowledging that reality may include non-physical aspects (consciousness, mathematical truths, values) without positing supernatural entities. Philosophers like Thomas Nagel, John Searle, and David Chalmers have developed non-materialist frameworks that don’t entail theism.

    Conclusion

    Naturalism fails as a comprehensive worldview. Its success in explaining physical phenomena doesn’t justify its extension to all aspects of reality. Its persistent explanatory gaps in consciousness, rationality, and value—coupled with its inability to justify its own foundations—reveal its fundamental inadequacy.

    A truly rational approach follows evidence where it leads, even when it points beyond the boundaries of naturalistic explanation. This isn’t an abandonment of rationality but its fulfillment—acknowledging that different aspects of reality may require different modes of understanding.

    Metaphysical pluralism offers a more coherent framework that honors the multidimensional character of reality. It maintains the empirical rigor of science within its proper domain while recognizing that human experience encompasses dimensions that transcend physical reduction. In doing so, it avoids both the reductionism of strict naturalism and the supernaturalism it rightly criticizes, providing a middle path that better accounts for the full spectrum of human knowledge and experience.

  • An Argument for Agent Causation in the Origin of DNA’s Information

    An Argument for Agent Causation in the Origin of DNA’s Information

    NOTE: This is a design argument inspired by Stephen Meyer’s design argument from DNA. Importantly, specified complexity is replaced with semiotic code (which I find more precise), and intelligent design is replaced with agent causation (which I find preferable).

    This argument posits that the very nature of the information encoded in DNA, specifically its structure as a semiotic code, necessitates an intelligent cause in its origin. The argument proceeds by establishing two key premises: first, that semiotic codes inherently require intelligent (agent) causation for their creation, and second, that DNA functions as a semiotic code.

    Premise 1: The Creation of a Semiotic Code Requires Agent Causation (Intelligence)

    A semiotic code is a system designed for conveying meaning through the use of signs. At its core, a semiotic code establishes a relationship between a signifier (the form the sign takes, e.g., a word, a symbol, a sequence) and a signified (the concept or meaning represented). Crucially, in a semiotic code, this relationship is arbitrary or conventional, not based on inherent physical or chemical causation between the signifier and the signified. This requires an interpretive framework – a set of rules or a system – that is independent of the physical properties of the signifier itself, providing the means to encode and decode the meaning. The meaning resides not in the physical signal, but in its interpretation according to the established code.

    Consider examples like human language, musical notation, or traffic signals. The sound “stop” or the sequence of letters S-T-O-P has no inherent physical property that forces a vehicle to cease motion. A red light does not chemically or physically cause a car to stop; it is a conventionally assigned symbol that, within a shared interpretive framework (traffic laws and driver understanding), signifies a command to stop. This is distinct from a natural sign, such as smoke indicating fire. In this case, the relationship between smoke and fire is one of direct, necessary physical causation (combustion produces smoke). While an observer can interpret smoke as a sign of fire, the connection itself is a product of natural laws, existing independently of any imposed code or interpretive framework.

    The capacity to create and utilize a system where arbitrary symbols reliably and purposefully convey specific meanings requires more than just physical processes. It requires the ability to:

    Conceive of a goal: To transfer specific information or instruct an action.

    Establish arbitrary conventions: To assign meaning to a form (signifier) where no inherent physical link exists to the meaning (signified).

    Design an interpretive framework: To build or establish a system of rules or machinery that can reliably encode and decode these arbitrary relationships.

    Implement this system for goal-directed action: To use the code and framework to achieve the initial goal of information transfer and subsequent action based on that information.

    This capacity to establish arbitrary, rule-governed relationships for the purpose of communication and control is what we define as intelligence in this context. The creation of a semiotic code is an act of imposing abstract order and meaning onto physical elements according to a plan or intention. Such an act requires agent causation – causation originating from an entity capable of intentionality, symbolic representation, and the design of systems that operate based on abstract rules, rather than solely from the necessary interactions of physical forces (event causation).

    Purely natural, undirected physical processes can produce complex patterns and structures driven by energy gradients, chemical affinities, or physical laws (like crystal formation, which is a direct physical consequence of electrochemical forces and molecular structure, lacking arbitrary convention, an independent interpretive framework, or symbolic representation). However, they lack the capacity to establish arbitrary conventions where the link between form and meaning is not physically determined, nor can they spontaneously generate an interpretive framework that operates based on such non-physical rules for goal-directed purposes. Therefore, the existence of a semiotic code, characterized by arbitrary signifier-signified links and an independent interpretive framework for goal-directed information transfer, provides compelling evidence for the involvement of intelligence in its origin.
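    The distinction drawn above can be sketched in code. The following toy Python example (purely illustrative; the code names and mappings are my own, not part of the argument) shows how the signifier–signified link in a semiotic code lives entirely in an interpretive framework (here, a lookup table), not in any physical property of the sign itself. Swapping the convention changes every “meaning” while leaving the signifiers untouched:

```python
# Toy illustration of a semiotic code: the signifier-signified mapping is
# an arbitrary convention held by an interpretive framework (a lookup
# table here), not a physical property of the sign.

TRAFFIC_CODE = {
    "red": "stop",
    "green": "go",
    "amber": "prepare to stop",
}

def interpret(signal: str, code: dict) -> str:
    """Decode a signifier according to a conventional code."""
    return code.get(signal, "no assigned meaning")

# The very same signifiers work under a different, equally arbitrary
# convention -- nothing about "red" physically compels "stop":
ALTERNATE_CODE = {"red": "go", "green": "stop", "amber": "prepare to stop"}

print(interpret("red", TRAFFIC_CODE))    # stop
print(interpret("red", ALTERNATE_CODE))  # go
```

The point of the sketch is that the meaning is recovered only by consulting the convention; a sign with no entry in the table has no meaning at all, unlike a natural sign such as smoke, whose link to fire holds regardless of any table.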

    Premise 2: DNA Functions as a Semiotic Code

    The genetic code within DNA exhibits the key characteristics of a semiotic code as defined above. Sequences of nucleotides (specifically, codons on mRNA) act as signifiers. The signifieds are specific amino acids, which are the building blocks of proteins.

    Crucially, the relationship between a codon sequence and the amino acid it specifies is not one of direct chemical causation. A codon (e.g., AUG) does not chemically synthesize or form the amino acid methionine through a direct physical reaction dictated by the codon’s molecular structure alone. Amino acid synthesis occurs through entirely separate biochemical pathways involving dedicated enzymes.

    Instead, the codon serves as a symbolic signal that is interpreted by the complex cellular machinery of protein synthesis – the ribosomes, transfer RNAs (tRNAs), and aminoacyl-tRNA synthetases. This machinery constitutes the interpretive framework.

    Here’s how it functions as a semiotic framework:

    • Arbitrary/Conventional Relationship: The specific assignment of a codon triplet to a particular amino acid is largely a matter of convention. While there might be some historical or biochemical reasons that biased the code’s evolution, the evidence from synthetic biology, where scientists have successfully engineered bacteria with different codon-amino acid assignments, demonstrates that the relationship is not one of necessary physical linkage but of an established (and in this case, artificially modified) rule or convention. Different codon assignments could work, but the system functions because the cellular machinery reliably follows the established rules of the genetic code.
    • Independent Interpretive Framework: The translation machinery (ribosome, tRNAs, synthetases) is a complex system that reads the mRNA sequence (signifier) and brings the correct amino acid (signified) to the growing protein chain, according to the rules encoded in the structure and function of the tRNAs and synthetases. The meaning (“add this amino acid now”) is not inherent in the chemical properties of the codon itself but resides in how the interpretive machinery is designed to react to that codon. This machinery operates independently of direct physical causation by the codon itself to create the amino acid; it interprets the codon as an instruction within the system’s logic.
    • Symbolic Representation: The codon stands for an amino acid; it is a symbol representing a unit of meaning within the context of protein assembly. The physical form (nucleotide sequence) is distinct from the meaning it conveys (which amino acid to add). This is analogous to the word “cat” representing a feline creature – the sound or letters don’t physically embody the cat but symbolize the concept.

    Therefore, DNA, specifically the genetic code and the translation system that interprets it, functions as a sophisticated semiotic code. It involves arbitrary relationships between signifiers (codons) and signifieds (amino acids), mediated by an independent interpretive framework (translation machinery) for the purpose of constructing functional proteins (goal-directed information transfer).
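    The translation process described above can likewise be sketched as a lookup against a conventional table. In this toy Python model (illustrative only; it shows just 4 of the 64 standard codon assignments, and the table stands in for the tRNAs, synthetases, and ribosome), reassigning a single entry changes the output protein without altering the chemistry of any codon, mirroring the recoding experiments from synthetic biology mentioned earlier:

```python
# Minimal sketch of translation as a semiotic mapping: codons (signifiers)
# are decoded into amino acids (signifieds) via a lookup table standing in
# for the cell's interpretive machinery. Only 4 of the 64 standard
# assignments are shown.

STANDARD_CODE = {
    "AUG": "Met",   # methionine; also the start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop signal
}

def translate(mrna: str, code: dict) -> list:
    """Read an mRNA string three bases at a time, decoding each codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = code[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA", STANDARD_CODE))  # ['Met', 'Phe', 'Gly']

# Reassign one codon, as synthetic biologists have done in engineered
# bacteria: the codons' chemistry is unchanged, but the "meaning" shifts.
RECODED = dict(STANDARD_CODE, UUU="Gly")
print(translate("AUGUUUGGCUAA", RECODED))  # ['Met', 'Gly', 'Gly']
```

The sketch captures the premise’s claim in miniature: the output is fixed by the table the interpretive machinery follows, not by any direct chemical necessity between a codon and “its” amino acid.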

    Conclusion: Therefore, DNA Requires Agent Causation in its Origin

    Based on the premises established:

    1. The creation of a semiotic code, characterized by arbitrary conventions, an independent interpretive framework, and symbolic representation for goal-directed information transfer, requires the specific capacities associated with intelligence and agent causation (intentionality, abstraction, rule-creation, system design).
    2. DNA, through the genetic code and its translation machinery, functions as a semiotic code exhibiting these very characteristics.

    It logically follows that the origin of DNA’s semiotic structure requires agent causation. The arbitrary nature of the code assignments and the existence of a complex system specifically designed to read and act upon these arbitrary rules, independent of direct physical necessity between codon and amino acid, are hallmarks of intelligent design, not the expected outcomes of undirected physical or chemical processes.

    Addressing Potential Objections:

    • Evolution and Randomness: While natural selection can act on variations in existing biological systems, it requires a self-replicating system with heredity – which presupposes the existence of a functional coding and translation system. Natural selection is a filter and modifier of existing information; it is not a mechanism for generating a semiotic code from scratch. Randomness, by definition, lacks the capacity to produce the specified, functional, arbitrary conventions and the integrated interpretive machinery characteristic of a semiotic code. The challenge is not just sequence generation, but the origin of the meaningful, rule-governed relationship between sequences and outcomes, and the system that enforces these rules.
    • “Frozen Accident” and Abiogenesis Challenges: Hypotheses about abiogenesis and early life (like the RNA world) face significant hurdles in explaining the origin of this integrated semiotic system. The translation machinery is a highly complex and interdependent system (a “chicken-and-egg” problem where codons require tRNAs and synthetases to be read, but tRNAs and synthetases are themselves encoded by and produced through this same system). The origin of the arbitrary codon-amino acid assignments and the simultaneous emergence of the complex machinery to interpret them presents a significant challenge for gradual, undirected assembly driven solely by chemical or physical affinities.
    • Biochemical Processes vs. Interpretation: The argument does not claim that a ribosome is a conscious entity “interpreting” in the human sense. Instead, it argues that the system it is part of (the genetic code and translation machinery) functions as an interpretive framework because it reads symbols (codons) and acts according to established, arbitrary rules (the genetic code’s assignments) to produce a specific output (amino acid sequence), where this relationship is not based on direct physical necessity but on a mapping established by the code’s design. This rule-governed, symbolic mapping, independent of physical causation between symbol and meaning, is the defining feature of a semiotic code requiring an intelligence to establish the rules and the system.
    • God-of-the-Gaps: This argument is not based on mere ignorance of a natural explanation. It is a positive argument based on the nature of the phenomenon itself. Semiotic codes, wherever their origin is understood (human language, computer code), are the products of intelligent activity involving the creation and implementation of arbitrary conventions and interpretive systems for goal-directed communication. The argument posits that DNA exhibits these defining characteristics and therefore infers a similar type of cause in its origin, based on a uniformity of experience regarding the necessary preconditions for semiotic systems.

    In conclusion, the sophisticated, arbitrary, and rule-governed nature of the genetic code and its associated translation machinery point to it being a semiotic system. Based on the inherent requirements for creating such a system—namely, the capacities for intentionality, symbolic representation, rule-creation, and system design—the origin of DNA’s information is best explained by the action of an intelligent agent.