Creation Questions

Category: Physics

  • Human Eyes – Optimized Design

    Is the human eye poorly designed? Or is it optimal?

    If you ask most proponents of modern evolutionary theory, you will often hear that the eye is a prime exhibit of unfortunate evolutionary history and dysteleology (bad design).

    Three major arguments are used to defend this view:

    The human eye:

    1. has an inverted retina and is wired “backwards”
    2. has a blind spot where the optic nerve exits
    3. is fragile, prone to retinal detachment

    #1 THE HUMAN EYE IS INVERTED

    The single most famous critique is, of course, the backward wiring of the retina. An optimal sensor, the argument goes, should use its entire surface area for data collection. Yet in the vertebrate eye, light must pass through layers of axons and capillaries before it ever reaches the photoreceptors.

    Take the cephalopod eye: it has an everted retina, with the photoreceptors facing the light and the nerves behind them, so there is no need for a blind spot. On this view, the human eye’s reversed wiring represents a mere local (rather than global) optimum: the eye could only optimize so far because of its evolutionary history.

    Yet this argument misses non-negotiable constraints. The human eye faces a metabolic necessity that simply doesn’t exist in the squid or octopus.

    Photoreceptors (the rods and cones) have the highest metabolic rate of any cell in the body. They generate substantial heat, consume oxygen at extraordinary rates, and undergo constant repair of the photochemical damage inflicted by incoming photons. The energy demand is massive. This is an issue of thermoregulation, not just optics.

    This matters because the vertebrate eye is structured with an inverted retina precisely for the survival and longevity of these high-energy photoreceptors. These cells require massive, continuous nutrient and oxygen delivery, and rapid waste removal.

    The current inverted orientation is the only geometric configuration that allows the photoreceptors to be placed in direct contact with the Retinal Pigment Epithelium (RPE) and the choroid. The choroid, a vascular layer, serves as the cooling system and high-volume nutrient source, similar to a cooling unit directly attached to a high-performance processor.

    If the retina were wired forward, the neural cabling would form a barrier, blocking the connection between the photoreceptors and the choroid. This would inevitably lead to nutrient starvation and thermal damage. Not only that, but human photoreceptors constantly shed their photo-damaged outer segments, which must be removed via phagocytosis by the RPE. The eye needs the tips of the photoreceptors to be physically embedded in the RPE.

    If the nerve fibers were placed in front, they would form a barrier preventing this waste removal. The inverted geometry is an imperative for long-term molecular recycling, and it is what allows eyes to routinely last 80+ years.

    Critics often insist, however, that even if the neural and capillary layers are metabolically necessary, placing them in front of the photoreceptors is still poor design because they block or scatter incoming light.

    Yet research has demonstrated that Müller glial cells span the thickness of the retina and act, essentially, as living fiber-optic cables. These cells possess a higher refractive index than the surrounding tissue, enabling them to channel light directly to the cones with minimal scattering.
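
    To get a feel for why a slightly higher refractive index is enough to guide light, here is a minimal waveguide sketch in Python. The index values are illustrative assumptions chosen only to show the principle, not measured values for Müller cells:

    ```python
    import math

    # Assumed (illustrative) refractive indices: a guiding fiber slightly
    # optically denser than its surroundings, as described qualitatively
    # for Muller glial cells relative to retinal tissue.
    n_cell = 1.38      # assumed index inside the cell
    n_surround = 1.36  # assumed index of the surrounding tissue

    # Total internal reflection traps any ray that strikes the boundary
    # at an angle (measured from the normal) steeper than the critical angle.
    theta_c = math.degrees(math.asin(n_surround / n_cell))
    print(f"critical angle: {theta_c:.1f} degrees from the boundary normal")

    # Rays traveling within (90 - theta_c) degrees of the cell's long axis
    # stay trapped, so even a small index contrast guides near-axial light.
    print(f"guided if within ~{90 - theta_c:.1f} degrees of the axis")
    ```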

    So the supposed design flaw turns out to be a wavelength-selective optical filter which improves the signal-to-noise ratio and visual acuity of the human eye.

    But wait, there’s more! The neural layers contain yellow pigments (lutein and zeaxanthin) which absorb excess blue and ultraviolet light, the most phototoxic part of the spectrum. This layer is basically a shield against photo-oxidative damage, extending the lifespan of these delicate sensors.
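
    The physics behind “blue and UV are the dangerous part of the spectrum” is just Planck’s relation: shorter wavelengths mean more energy per photon. A quick back-of-the-envelope check:

    ```python
    # Photon energy E = h*c / wavelength, converted to electron-volts.
    H = 6.626e-34   # Planck's constant, J*s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electron-volt

    for name, nm in [("UV-A", 360), ("blue", 450), ("red", 650)]:
        energy_ev = H * C / (nm * 1e-9) / EV
        print(f"{name:>5} ({nm} nm): {energy_ev:.2f} eV per photon")

    # UV-A photons carry ~1.8x, and blue ~1.4x, the energy of red photons,
    # which is why they drive far more photochemical (oxidative) damage.
    ```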

    #2 THE HUMAN EYE HAS A BLIND SPOT

    However, the skeptics will still push back (which leads to point number 2): surely a good design would not include a blind spot where the optic nerve exits! At a glance, this point is fairly powerful. But on closer inspection, we see that this exit point, where roughly a million nerve fibers bundle together to leave the eye, is an example of optimized routing and not a critical flaw of any kind.

    This is true for many reasons. For one, bundling the nerves into a single reinforced exit point maximizes the structural robustness of the remaining retina. If the nerve fibers instead exited individually, or even in small clusters scattered across the retina, the integrity of the whole design would be radically compromised: the retina would be prone to tearing during rapid eye movements (saccades). In other words, we wouldn’t be getting much REM sleep! And beyond that, we’d have to give up most looking around of any kind.

    I’d say that even if this were the only advantage, the loss of a tiny fraction of our visual field would be worth the trade-off (a rough estimate of just how tiny appears after the third point below).

    Second, and this is important, the blind spot is functionally irrelevant. What do I mean by that? Humans were designed with two eyes for depth perception (stereopsis), i.e., understanding where things are in space. You can’t do that with one eye, so a one-eyed design was never an option. With two eyes, the functional retina of the left eye covers the blind spot of the right eye, and vice versa. There is no problem in this design so long as both full visual coverage and depth perception are achieved: and they are.

    Third, the optic disc is also used for integrated signal processing: it contains melanopsin-driven cells that help calibrate brightness perception for the entire eye, effectively using the exit cable as a sensor probe. The nerves, in other words, also detect brightness and handle that processing locally, which is remarkably efficient.
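
    To put the “tiny fraction” claim from the first point into rough numbers, here is a crude solid-angle estimate. The figures used (an optic disc subtending roughly 5 by 7 degrees, a monocular field of roughly 160 by 135 degrees) are ballpark assumptions, and the small-angle formula is only an approximation for the wide field:

    ```python
    import math

    def ellipse_solid_angle(width_deg, height_deg):
        """Approximate solid angle (steradians) of an elliptical patch.

        Uses the small-angle formula pi*a*b; crude for wide fields,
        but fine for an order-of-magnitude comparison.
        """
        a = math.radians(width_deg / 2)
        b = math.radians(height_deg / 2)
        return math.pi * a * b

    blind_spot = ellipse_solid_angle(5, 7)  # assumed ~5 x 7 degrees
    field = ellipse_solid_angle(160, 135)   # assumed monocular field
    print(f"blind spot: ~{blind_spot:.4f} sr, field: ~{field:.1f} sr")
    print(f"fraction lost: ~{100 * blind_spot / field:.2f}% of one eye's field")
    ```

    On these assumptions the blind spot costs well under one percent of a single eye’s field, before binocular overlap erases it entirely.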

    #3 THE HUMAN EYE IS VULNERABLE

    The vulnerability here refers specifically to retinal detachment, which is when the neural retina separates from the RPE. Why does this happen? It is a consequence of the retina being held loosely against the choroid, largely by hydrostatic pressure. Critics call this a failure point. Wouldn’t a good design fuse the retina solidly to the RPE, especially since the two need to stay connected? Well… no, not remotely.

    The RPE must actively and continuously transport large volumes of fluid out of the subretinal space to the choroid to prevent edema (swelling) and maintain clear vision. A mechanically fused retina would impede this rapid fluid transport and waste exchange. The critics’ “superior” alternative is really a non-solution: an eye built that way could not function at all.

    So, what have we learned?

    The human eye is not a collection of accidents, but a masterpiece of constrained optimization. When the entire system (eye and brain) is evaluated, the result is astonishing performance. The eye achieves resolution at the diffraction limit (the theoretical physical limit imposed by the wave nature of light!) at the fovea, meaning it is hitting the maximum acuity possible for an aperture of its size.
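
    As a sanity check on that claim, here is the standard Rayleigh-criterion arithmetic with ballpark values (a ~3 mm daylight pupil and ~550 nm green light are assumptions, not measurements from any particular study):

    ```python
    import math

    WAVELENGTH = 550e-9  # m, mid-spectrum green light (assumed)
    PUPIL = 3e-3         # m, typical daylight pupil diameter (assumed)

    # Rayleigh criterion: the smallest angle a circular aperture can resolve.
    theta = 1.22 * WAVELENGTH / PUPIL  # radians
    arcmin = math.degrees(theta) * 60
    print(f"diffraction limit: ~{arcmin:.2f} arcminutes")

    # Standard 20/20 vision corresponds to resolving about 1 arcminute,
    # so foveal acuity sits right at the physical limit set by pupil size.
    ```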

    The arguments that the eye is “sub-optimal” often rely on comparing it to the structurally simpler cephalopod eye. Yet cephalopod eyes lack trichromatic vision (they don’t see color the way we do), have markedly lower acuity, and only need to function for a lifespan of 1–2 years (whereas the human eye must self-repair and maintain high performance for eight decades). The eye’s complexities (the Müller cells, the foveal pit, the inverted architecture) are the necessary subsystems required to achieve this maximal performance within the constraints of vertebrate biology and physics.

    That’s not even getting to things like the mitochondrial microlenses in our cone cells. Recent research suggests that the tightly packed mitochondria in cone photoreceptors may actually function as micro-lenses that concentrate light onto the outer segments, adding yet another layer of optical optimization. And that optimization would need to have been in place, perhaps a lot earlier than even the inverted retinal architecture.

    The fact remains that the eye is strikingly optimized, despite the critics’ best attempts to argue otherwise. Therefore, the question remains: how could something so optimized evolve by random chance mutation, and appear so early and so often in the history of biota?

  • Do Creationists Make Predictions?

    A common criticism against scientists who espouse a young earth and a global flood is that they don’t make testable predictions. A closer look, however, reveals that creation science has a robust history of making predictions that challenge mainstream assumptions. To answer the critics, we will look at eight notable predictions which are rooted in a biblical perspective of history, have been repeatedly validated, and prompt a re-evaluation of the established paradigm.

    1. The Rapid Formation of Opals

    Dr. Len Cram, a creationist geologist from New South Wales, Australia, dared to question the conventional timescale for opal formation. Mainstream geology posits that opals form over millions of years through slow, gradual processes. Cram, drawing on the catastrophic implications of the global Flood, predicted that opals could form rapidly in silica-rich solutions under conditions of rapid deposition. His experimental work demonstrated the feasibility of this rapid formation, challenging the long-age assumptions of conventional geology. While consensus geology told a story about opals that fit its narrative, a creationist demonstrated a practical mechanism for opal formation.

    2. Carbon-14 in “Ancient” Samples

    One of the most contentious areas of debate is the presence of Carbon-14 (C-14) in samples deemed millions of years old. Given its relatively short half-life of 5,730 years, C-14 should be undetectable in any sample older than about 100,000 years. Yet creation scientists, including those involved with the RATE (Radioisotopes and the Age of The Earth) project, have consistently predicted and found measurable C-14 in fossils, coal, and diamonds (Baumgardner, 2003). This finding directly challenges long-age interpretations and raises questions about the assumptions underlying radiometric dating; significantly, it was predicted by creationists.
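
    The half-life arithmetic behind that expectation is easy to verify. A minimal sketch of the standard decay law:

    ```python
    # Radioactive decay: fraction remaining = 0.5 ** (t / half_life).
    HALF_LIFE = 5730.0  # years, carbon-14

    def fraction_remaining(years):
        return 0.5 ** (years / HALF_LIFE)

    for age in (50_000, 100_000, 1_000_000):
        print(f"after {age:>9,} years: {fraction_remaining(age):.1e} remains")

    # After a million years the surviving fraction is on the order of
    # 10^-53, effectively zero atoms, which is why any reproducible C-14
    # signal in samples labeled that old demands an explanation.
    ```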

    3. Mature Galaxies and the Absence of Population III Stars

    In the realm of cosmology, Dr. Jason Lisle predicted that the James Webb Space Telescope (JWST) would reveal mature galaxies at great distances and a lack of Population III stars, the hypothetical first stars formed after the Big Bang. This prediction stands in stark contrast to standard cosmological models, which require long periods for galaxy formation and predict the existence of these primordial stars. The early JWST data has aligned with Lisle’s prediction, prompting a re-evaluation of current cosmological timelines. Another prediction in the bag.

    4. The Functionality of “Junk” DNA

    Evolutionary theory initially proposed that non-coding DNA was “junk,” remnants of evolutionary processes with no function. However, creation scientists, including Dr. Robert Carter, predicted that this “junk” DNA would be found to have important functions (Carter, 2010). The ENCODE project and subsequent research have demonstrated widespread biochemical activity within non-coding DNA, indicating its crucial roles in gene regulation and other cellular processes. This discovery challenges the notion of “junk” DNA and supports the concept of intelligent design.

    5. Helium Diffusion in Zircon Crystals

    Back to geology. In 1982, Dr. Robert Gentry discovered that the amount of nuclear-decay-generated helium retained in zircons (tiny crystals found in granites) was far too high for the rocks to have undergone millions of years of decay at a constant rate (Gentry, 1986). His observation led Dr. Russell Humphreys, during the early stages of the RATE project, to publish a prediction of the zircons’ helium diffusion rates (Humphreys, 2000, p. 348, Figure 7). When an independent laboratory later measured those rates, the data fit his prediction remarkably well, challenging conventional radiometric dating assumptions: such high helium retention indicates the zircon crystals cannot be millions of years old.
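
    The comparison at the heart of this test was over how fast helium leaks out of zircon at a given temperature. Diffusivity follows an Arrhenius law; the sketch below shows the form of that calculation with placeholder constants for illustration, not Humphreys’ actual fitted values:

    ```python
    import math

    R = 8.314  # J/(mol*K), universal gas constant

    def diffusivity(temp_c, d0, e_a):
        """Arrhenius law: D = D0 * exp(-Ea / (R * T))."""
        return d0 * math.exp(-e_a / (R * (temp_c + 273.15)))

    D0 = 1e-4    # m^2/s, pre-exponential factor (placeholder assumption)
    E_A = 150e3  # J/mol, activation energy (placeholder assumption)

    for t_c in (100, 150, 200):  # borehole-like temperatures, deg C
        print(f"{t_c} C: D ~ {diffusivity(t_c, D0, E_A):.2e} m^2/s")

    # The steep temperature dependence is the point: hotter (deeper) zircons
    # should leak helium much faster, so measuring high helium retention
    # constrains how long the crystals have sat at those temperatures.
    ```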

    6. Cool Subducted Zones and Rapid Plate Tectonics

    Dr. John Baumgardner, a geophysicist, predicted that subducted lithospheric slabs in the mantle would be cooler than expected (Baumgardner, 1994), owing to rapid plate tectonics during the Flood. Seismic observations have since confirmed these cooler zones, supporting the Catastrophic Plate Tectonics (CPT) model.
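
    For context on why slab temperatures carry so much information, the characteristic time for a slab of rock to warm by conduction scales as its thickness squared over its thermal diffusivity. A rough order-of-magnitude sketch with assumed values:

    ```python
    SECONDS_PER_YEAR = 3.156e7
    KAPPA = 1e-6       # m^2/s, typical thermal diffusivity of rock (assumed)
    THICKNESS = 100e3  # m, assumed slab thickness of ~100 km

    # Conductive timescale: t ~ L^2 / kappa (order of magnitude only).
    t_years = THICKNESS**2 / KAPPA / SECONDS_PER_YEAR
    print(f"conductive equilibration time: ~{t_years:.0e} years")

    # Roughly 3e8 years: thick slabs warm very slowly, which is why the
    # observed temperature contrast of subducted material is a key data
    # point for any model of how recently and how fast subduction occurred.
    ```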

    7. Lack of Metamorphosis in Folded Rock Layers

    Geologist Dr. Andrew Snelling predicted that Tapeats Sandstone samples taken from folds would show no metamorphic change in their minerals, despite the bending of the layers. His reasoning: all the sedimentary layers were laid down during the Flood, and seismic activity then deformed the still-soft layers over the hardened basement faults beneath them. Snelling et al. investigated the Tapeats and found no metamorphism (Snelling, 2021). This evidence supports the prediction that these rocks were bent while still soft, and it refutes the mainstream prediction of ductile deformation (immense pressure and heat acting over long ages, which should produce metamorphic changes). The folding occurred rapidly, before the rocks had time to metamorphose.

    8. Human Genetic Diversity

    Creation models predicted a relatively recent origin for humanity, with low genetic diversity. Genetic studies, including those on mitochondrial DNA and the Y chromosome, have supported this prediction, pointing to a relatively recent common ancestry.
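
    The arithmetic behind such predictions is simple: two maternal lineages that split from one ancestor accumulate differences at roughly twice the per-generation mutation rate. A sketch with openly assumed parameters, for illustration only:

    ```python
    # Expected pairwise mtDNA differences between two lineages that
    # diverged G generations ago, each mutating independently at rate MU.
    MU = 0.2        # mutations per mtDNA genome per generation (assumed)
    YEARS = 6_000   # assumed recent-origin timescale
    GEN_TIME = 25   # years per generation (assumed)

    generations = YEARS / GEN_TIME
    expected_diffs = 2 * MU * generations
    print(f"{generations:.0f} generations -> ~{expected_diffs:.0f} "
          "expected pairwise differences")

    # The modeling question is then whether observed human mtDNA diversity
    # sits closer to a recent-origin expectation or a long-age one.
    ```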


    These are my top eight examples which highlight the predictive power of the creationist model. These predictions and their verifications dispel the myth that “creationists don’t make predictions” and, hopefully, give you a deeper appreciation for the robustness and explanatory power of the creationist worldview.

    Citations:

    1. Baumgardner, J. R. (2003). Carbon-14 evidence for a recent global flood and a young age of the Earth. In Proceedings of the Fifth International Conference on Creationism (pp. 129-142). Creation Science Fellowship.
    2. Carter, R. W. (2010). The non-coding genome. Journal of Creation, 24(3), 116-123.
    3. Gentry, R. V. (1986). Radiohalos in polonium 218: Evidence of a Precambrian granite. Science, 234(4776), 561-566.
    4. Humphreys, D. R. (2000). Accelerated nuclear decay: evidence for young-age radiocarbon dating. In Radioisotopes and the Age of The Earth: Results of a Young-Earth Creationist Research Initiative (pp. 333-379). Institute for Creation Research.
    5. Baumgardner, J. R. (1994). Runaway subduction as the driving mechanism for the Genesis flood. In Proceedings of the Third International Conference on Creationism (pp. 63-75). Creation Science Fellowship.
    6. Snelling, A. A. (2021). The Petrology of the Tapeats Sandstone, Tonto Group, Grand Canyon, Arizona. Answers Research Journal, 14, 159–254.

  • Heisenberg, Kant, and the Limits of Science

    In the realm of scientific inquiry, the intersection of epistemology (the study of knowledge) and physics often leads to profound philosophical debates. One such debate, highlighted by the clash between Kant’s universal claims and Heisenberg’s quantum observations, raises critical questions about the nature of causality and the limits of scientific knowledge.

    Kant posited universal axioms about metaphysics and epistemology, suggesting that certain principles, like causality, are a priori—foundational to all experience. Heisenberg, however, proposed that these principles might not apply in the quantum realm, where observations seem to reveal phenomena without clear causal explanations. This divergence raises a fundamental question: Can scientific theories, particularly those in quantum physics, challenge or redefine the very foundations of how we understand knowledge?

    The Challenge to Universal Causality

    Heisenberg, in his work “Physics and Beyond,” recounts a conversation with Grete Hermann, a Kantian philosopher, who argued that causality is not an empirical assertion but a necessary presupposition for all experience. Hermann emphasized that without a strict relationship between cause and effect, our observations would be mere subjective sensations, lacking objective correlates. She questioned how quantum mechanics could “relax” the causal law and still claim to be a branch of science.

    Heisenberg countered that in quantum mechanics, we only have access to statistical averages, not underlying processes. He cited the example of Radium B atoms emitting electrons, where the timing and direction of emission appear stochastic. He argued that extensive research reveals behaviors with no discernible causes, suggesting that causality breaks down at the quantum level.
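
    Heisenberg’s point can be made concrete with a toy simulation: each individual decay time is unpredictable, yet the ensemble average is lawlike. A minimal sketch (Radium B is the historical name for lead-214; its roughly 27-minute half-life is used here as an assumption):

    ```python
    import math
    import random

    HALF_LIFE_MIN = 26.8               # minutes, approx. Pb-214 (assumed)
    TAU = HALF_LIFE_MIN / math.log(2)  # mean lifetime

    # Draw individual decay times: each one is random in isolation.
    times = [random.expovariate(1 / TAU) for _ in range(100_000)]

    print("five individual decay times (min):",
          [round(t, 1) for t in times[:5]])
    print(f"ensemble mean: {sum(times) / len(times):.1f} min "
          f"(theoretical: {TAU:.1f} min)")

    # No single atom's decay time can be predicted, but the statistical
    # average converges on the lawful value: exactly the situation
    # Heisenberg described with "statistical averages".
    ```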

    Creationist Perspectives on Causality and Randomness

    From a creationist perspective, the concept of randomness must be carefully examined. As David Bohm suggests in “Causality and Chance in Modern Physics,” random processes can exist within objects that are nonetheless real and independent of observation. This aligns with the idea that even seemingly random events may be governed by underlying, complex causal laws, perhaps beyond our current comprehension.

    Consider the Created Heterozygosity Hypothesis, which posits that organisms were created with “front-loaded” genomes, containing a high degree of genetic variation. This variation can manifest as apparent randomness in biological processes, but it does not negate the existence of underlying design and purpose.

    Furthermore, information theory, as applied in intelligent design arguments, holds that specified information is always the product of intelligent agency. The complexity and specificity observed in quantum phenomena may point to an underlying intelligence operating beyond the limitations of our current scientific models.

    Addressing the Limits of Scientific Knowledge

    Hermann rightly pointed out that the absence of a discovered cause does not imply the absence of a cause. She argued that physicists should continue searching for underlying causes rather than abandoning the principle of causality altogether. This aligns with the creationist view that our understanding of the natural world is incomplete, and that further investigation may reveal deeper levels of design and purpose.

    The debate between Heisenberg and Hermann highlights the limitations of science. As creationists, we acknowledge that science is a powerful tool for understanding the natural world, but it is not the ultimate arbiter of truth. Methodological naturalism, the assumption that all phenomena can be explained by natural causes, arbitrarily excludes the possibility of non-natural agency.

    The Necessity of Universal Presuppositions

    Kant’s emphasis on universal presuppositions, like causality, underscores the importance of a solid epistemological foundation. Without these foundational beliefs, our ability to claim objective knowledge about the world is undermined. As Carl Friedrich von Weizsäcker clarified in that same conversation, “Every perception refers to an observational situation that must be specified if experience is to result. The consequence of a perception can no longer be objectified in the manner of classical physics.” This does not mean that Kant’s principles are wrong, but that our understanding of observation has changed.

    The creationist worldview recognizes that the universe is the product of an intelligent Creator, whose design and purpose are evident in the natural world. Therefore, the search for causal explanations should not exclude the possibility of non-natural or intelligent causes.

    Conclusion: A Call for Intellectual Honesty

    The philosophical tension between Kant and Heisenberg reveals a fundamental issue at the intersection of epistemology and quantum physics. Heisenberg’s challenge to universal causality, while based on observed phenomena, ultimately undermines the foundation of scientific knowledge.

    As creationists, we advocate for intellectual honesty and a comprehensive approach to scientific inquiry. We acknowledge the limits of science and the importance of universal presuppositions, such as causality. We recognize that our understanding of the universe is incomplete and that further investigation, guided by both scientific rigor and a biblical worldview, may reveal deeper levels of design and purpose.

    The debate over causality in quantum mechanics should remind us that scientific advances, while valuable, should not lead us to abandon the foundational principles that make knowledge possible. Instead, we should embrace a holistic approach that integrates scientific observations with a robust epistemological framework, recognizing the limits of human understanding and the possibility of non-natural causes.

    Sources:

    Bohm, D. (1957). Causality and Chance in Modern Physics. Routledge & Kegan Paul.

    Heisenberg, W. (1971). Physics and Beyond: Encounters and Conversations. Harper & Row.