4.1 Introduction

Various properties of the physical world have been suggested as indicative of the work of ‘a designer with the intellectual properties (knowledge, purpose, understanding, foresight, wisdom, intention) necessary to design the things exhibiting the special properties in question’ (Ratzsch and Koperski 2019). These properties include the ‘fine-tuning’ of the inorganic realm for supporting life, orderliness, uniformity, contrivance, adjustment of means to ends, particularly exquisite complexity, particular types of functionality, delicacy, integration of natural laws, improbability, the intelligibility of nature, the directionality of evolutionary processes, aesthetic characteristics (beauty, elegance, and the like), and apparent purpose and value (including the aptness of our world for the existence of moral value and practice) (Ratzsch and Koperski 2019). In this book, I shall focus on two such properties: fine-tuning and orderliness, although it should be noted that the other properties require explanation as well and I shall discuss them occasionally in what follows.

4.2 Fine-Tuning and Orderliness

4.2.1 Fine-Tuning

Concerning the ‘fine-tuning’ of the universe, Robin Collins explains,

The fundamental structure of the universe is ‘balanced on a razor’s edge’ for the existence of life …. This precise setting of the structure of the universe for life is called the ‘fine-tuning of the cosmos’. This fine-tuning falls into three major categories: that of the laws of nature, that of the constants of physics, and that of the initial conditions of the universe. (Collins 2009, p. 202)

It has been objected that there could be other forms of life which do not require a fine-tuned universe (Stenger 2013). In reply, what the calculations have shown is that universes with different laws, constants, and boundary conditions would most likely give rise to much less structure and complexity, which would be incompatible with any kind of life, not merely life-as-we-know-it (Lewis and Barnes 2016, pp. 255–274). This is illustrated by the following two examples of fine-tuning:

First, the cosmological constant characterizes the energy density of the vacuum, which is responsible for the acceleration of the universe’s expansion. On theoretical grounds, one would expect it to be larger than its actual value by an immense factor (between 10^50 and 10^123), but only values a few orders of magnitude larger than the actual value are compatible with the formation of galaxies (Friederich 2018). Lewis and Barnes (2016, p. 164) remark:

The (effective) cosmological constant is clearly fine-tuned. It’s just about the best fine-tuning case around. There is no simpler way to make a universe lifeless than to make it devoid of any structure whatsoever. Make the cosmological constant just a few orders of magnitude larger and the universe will be a thin, uniform hydrogen and helium soup, a diffuse gas where the occasional particle collision is all that ever happens. Particles spend their lives alone, drifting through emptying space, not seeing another particle for trillions of years and even then, just glancing off and returning to the void.

The fine-tuning of the cosmological constant has recently been challenged by physicist Fred Adams (2019), who argues that the life-permitting variation of the constants is wider in some respects than previously thought. Nevertheless, he acknowledges that ‘Even if the parameters of physics and cosmology can deviate from their values in our universe by orders of magnitude, “unnaturally small” ratios are still required: For example, the cosmological constant can vary over a wide range, but must be small compared to the Planck scale’ (pp. 141–2). In other words, the range is still not wide in an absolute sense, and ‘fine-tuning’ (at the level of the ratios) is still required.

Second, concerning entropy, the initial state of the Big Bang must be extremely highly ordered (i.e. low entropy), with a very high amount of usable energy. The probability that a universe chosen at random would possess the necessary degree of order that ours does (and so possesses a second law of thermodynamics according to which the universe is progressing from a state of order to states of increasing disorder) is 1 in 10^(10^123). If the universe were less ordered than this, the matter in it would have collapsed through friction into black holes (which represent extreme states of disorder and are incompatible with any form of life), rather than forming stars (Holder 2004, pp. 38–39, citing Penrose 1989, pp. 339–345, who notes that 10^(10^123) is a number so large that the noughts could not be written down in full even if each of the 10^80 protons of our universe were used to write down one nought).
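To unpack the notation (this merely restates Penrose’s comparison in symbols): the number is a one followed by 10^123 noughts, and the exponent itself dwarfs the proton count of the observable universe:

```latex
\[
  \frac{1}{10^{10^{123}}},
  \quad\text{where}\quad
  10^{10^{123}} = \underbrace{1\,000\ldots000}_{10^{123}\ \text{noughts}},
  \qquad 10^{123} \gg 10^{80}\ \text{(protons in the observable universe)} .
\]
```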

Stenger (2013) objects that calculations of improbabilities often fail to consider the consequences of varying more than one parameter at a time. In reply, studies of the complete parameter space of (segments of) the Standard Model indicate that the life-permitting range in multidimensional parameter space is likely very small (Barnes 2012, Sect. 4.2). Without fine-tuning, the universe would have become ‘rubble’ after the Big Bang, in which case not only would ‘life as we know it’ not exist; any organized matter with the ability to reproduce would not exist either. Against the supposition that proponents of fine-tuning erroneously presuppose that only carbon-based life is possible, Hawthorne and Isaacs (2018, p. 147) note that ‘it would be very hard to have physical life in any form if an inhospitable cosmological constant led to a universe that expanded so rapidly that particles did not interact with one another or to a universe that collapsed back in on itself only moments after its generation’. Likewise, Rasmussen and Leon (2018, pp. 103-4) observe:

A universe with nothing but empty space has no ingredients for life … a million motionless particles will never produce an amoeba … a universe with only particles that constantly repel each other will produce an endless scatter, with no complex unities, anywhere, ever … a universe with things that only attract each other will only form a blob, forever.

4.2.2 Orderliness

4.2.2.1 Introduction

A classic example of the Teleological Argument from orderliness is the Fifth Way of Aquinas’ famous five proofs for the existence of God (Summa Theologica, Part I, Question 2, Article 3), in which Aquinas argues from ‘things which lack knowledge, such as natural bodies … acting always, or nearly always, in the same way’.Footnote 1

It is an undeniable fact that the natural world appears to exhibit certain regular patterns of behaviour. When one gazes into the night sky, one cannot help but wonder why the stars and planets move according to a certain order. Likewise, the alternation of the seasons, the formation of clouds and rain, the sustenance of life on earth, and so on are also in accordance with a certain order. This order is characterized by law-like regularities which are of a mathematical nature and are predictably the same everywhere in the universe.Footnote 2

Cambridge physicist John Polkinghorne argues that those who work in fundamental physics encounter a world in which large-scale structures and small-scale processes are alike characterized by a wonderful order that is expressible in concise and elegant mathematical terms, citing Paul Dirac’s well-known belief that the laws of nature should be expressed in beautiful equations (Polkinghorne 1998, p. 2).

Polkinghorne explains that mathematical beauty involves such qualities as economy and elegance, and that extensive consequences are found to flow from seemingly simple initial definitions, as when the endless baroque complexities of the Mandelbrot set are seen to derive from a specification that can be written down in a few lines. Polkinghorne writes,

300 years of enquiry have shown that it is just such mathematically beautiful theories that prove to have the long-term fertility of explanation that convinces us that they are indeed describing aspects of the way things are. In other words, some of the most beautiful patterns that the mathematicians can think about in their studies are found actually to be present in the structure of the physical world around us. (Polkinghorne 2011, pp. 72-3)
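Polkinghorne’s Mandelbrot example is easy to make concrete. The following minimal sketch (my own, in Python, not from Polkinghorne) implements the few-line specification: a point c belongs to the set just in case the iteration z → z² + c, starting from zero, stays bounded. The endless baroque detail of the set flows from nothing more than this.

```python
# The Mandelbrot set's defining rule fits in a few lines: c is in the set
# iff the iteration z -> z**2 + c (starting from z = 0) stays bounded.

def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit escapes to infinity
            return False
    return True

# Crude ASCII rendering of the familiar shape from this tiny definition.
for im in range(12, -13, -2):
    row = "".join("#" if in_mandelbrot(complex(re / 30, im / 24)) else " "
                  for re in range(-60, 21))
    print(row)
```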

Nevertheless, McGrath (2018, pp. 118-119) observes that ‘the concept of “beauty” is subjective and contested, leading some to make the “eminently rational decision” to pursue “indicators of truth in disregard of beauty.” Properties of a theory that have at some point been considered to be aesthetically attractive have at other times been considered neutral or displeasing.’

Regardless of whether ‘beauty’ is present or not, the mathematical describability of the order is indisputable. With regard to this order, Oxford physicist Roger Penrose confesses that ‘it remains a deep puzzle why mathematical laws should apply to the world with such phenomenal precision … Moreover, it is not just the precision but also the subtle sophistication and mathematical beauty of these successful theories that is profoundly mysterious’ (Penrose 2004, pp. 20–21).

After surveying the discoveries of the laws of nature in over 1000 pages of his magisterial book The Road to Reality, Penrose writes: ‘The most important single insight that has emerged from our journey, of more than two and one-half millennia, is that there is a deep unity between certain areas of mathematics and the workings of the physical world’ (ibid., pp. 1033–1034). Citing the highly esteemed mathematical physicist Eugene Wigner’s (1960) lecture on the effectiveness of mathematics in the physical sciences, Penrose comments: ‘Not just the extraordinary precision, but also the subtlety and sophistication that we find in the mathematical laws operative at the foundations of physics seem to me to be much more than the mere expression of an underlying ‘order’ in the workings of the world’ (ibid., p. 1046n.34).

4.2.2.2 Objection: Human Creation

Some might think that, because we invented the mathematics that characterizes the way our world operates, it is not surprising that the universe operates according to mathematical patterns. Carrier (2003) claims that ‘any universe composed of conserved and discrete objects arranged into patterns in a multidimensional space will always be describable by mathematics. We invented mathematics just for that purpose: to describe such things.’ On this view, some sort of mathematical order or another has to apply to the universe, and one might claim that we just happen to live in the one we observe. Likewise, Livio notes that some have objected that mathematics is a human creation developed to characterize the operation of our world and to solve the problems our world presents; nature, if it is explicable at all, has to be explicable in some form of language or model, and mathematics is just that. Given this, it is hardly surprising that the universe operates according to mathematical patterns. Others have objected that mathematics may not explain every situation, and that to some extent scientists have cherry-picked what problems to work on based on those problems being amenable to mathematical treatment (Livio 2009, Chap. 9).

Wenmackers (2016) argues that our knowledge and use of mathematics may have arisen through the evolutionary process. Proto-mathematical capacities might have been useful in earlier evolutionary stages of our species; for example, being able to estimate and to compare the number of fruits hanging from different trees contributes to efficient foraging patterns. These capacities were therefore naturally selected and developed into our current power to think abstractly and to act with foresight. She concludes that the fact that our mathematical reasoning can be applied successfully is precisely why the traits that enable us to achieve this were selected in our biological evolution (p. 9).

Nevertheless, the above objections do not explain how physical entities could be of such a nature as to allow a large number of phenomena to be mathematically describable and explicable in ways that require a highly advanced intellect to work out. The mathematics involved in describing our universe is not a matter of a few simple equations like 2+2=4, but of highly sophisticated ones, and (contra Carrier) not any universe would be like this. Rather, the universe would have to be highly ordered, as implied by its describability by advanced mathematics.

The above point holds even if (as some have suggested) mathematics basically just describes conditionals of some sort or other, for the conditionals would not be as simple as ‘if you were to have 2 things, and add to them 2 more things, then you would have 4 things’. Rather, it would be something like, ‘if you were to have m, and add m to another m to another m … 90000000000000000 times, you will get a value for e, which is related to time x power, which is related to … etc.’ This conditional implies a huge, interconnected, highly ordered structure. A highly ordered structure is far less likely to be explainable by chance than a simply ordered one (see Sect. 4.4), and for a highly ordered structure to arise from simple laws, a high degree of order must already be in place for this ‘arising’ to happen (see Sects. 4.5 and 4.6). My argument does not require an appeal to ‘why God would particularly care about advanced mathematics’ (see Sect. 7.6); rather, it is based on exclusion (Sect. 7.5).

The multitude of mathematical equations with numerous variables reflects a highly ordered arrangement of the distinct objects which compose the physical world described by these equations. The patterns of order in multidimensional space and natural laws with systemic applicability reflect a huge interconnected structure with multiple parts. It would be unreasonable to explain away such a structure by saying that some sort of order or another has to apply to the components, and that we just happen to discover the one we observe. Physicist Michael Heller remarks that the mathematical equations in physics can be treated by physicists as expressing a kind of software of the universe (Heller 2013, p. 594), and one would think that there cannot be software without a software programmer. To establish the conclusion of design, however, requires ruling out alternative hypotheses, which I undertake in the rest of this book. The point here is simply that, while the objections by Carrier et al. may explain the applicability of simple calculations, they do not explain the high degree of ordering of the physical world that is presupposed by the applicability of high-level mathematics.

A Kantian might explain mathematical discovery by arguing that mathematics is the conceptual framework through which we experience the phenomenal universe, while claiming that we know nothing about the noumenal universe. Nevertheless, in order for such highly sophisticated mathematics to successfully characterize the way our world operates, the objective world, that is, the universe-in-itself, must have a high degree of order. As Einstein observes, ‘even if man proposes the axioms of the theory, the success of such a project presupposes a high degree of ordering of the objective world, and this could not be expected a priori’ (Goldman 1997, p. 24).

Einstein’s argument is not based on the mere presence of order within our universe; it is based on the high degree of rationality and intelligibility of that order. The particles of the universe are related to one another and many particles behave similarly, and the question that needs to be answered is, ‘Why are their relations and behaviour so rational, intelligible and highly ordered, forming such a huge interconnected structure, instead of being crude, simple and possessed of an almost featureless order?’

Even at the quantum level, where things are often regarded as messy and counter-intuitive, various mathematical equations such as Schrödinger’s still hold, and this, as well as the widespread effectiveness of mathematics at the macro level, demands explanation.

Moreover, if ordering is an inevitable selection effect created by our act of perception as the Kantian asserts, why do we still find some things disordered or yet unintelligible and not see everything as a teleological structure (Barrow and Tipler 1986, p. 91, citing Janet, Trendelenburg and Herbart)? Holder (2004, p. 4) notes:

Kant’s position regarding the human imposition of order also does not seem to square with how scientists see the world. For example, quantum theory seems to be forced on us by the reality of the external world, which exhibits such strange and startling phenomena at the micro-level, rather than being a human creation imposed on the world.

In other words, contrary to the Kantian, the counter-intuitive nature of quantum physics indicates that the mathematical equations that are used to describe it are not merely the creation of our own minds, and those seeking an understanding as completely as possible must therefore ask what it could be that links together the reason within (mathematical thinking) and the reason without (the structure of the physical world) in this remarkable way (Polkinghorne 2011, pp. 72–73).

Additionally, as demonstrated in Chap. 1, the laws of logic are not merely our way of thinking but reflect the way mind-independent reality is (e.g. there cannot be a shapeless square in mind-independent reality), and therefore these laws can be used to formulate an argument by exclusion for a Designer (see below).

4.2.2.3 Platonic Objection

It has been suggested that the reason why the ‘laws of physics’ are so well explained by mathematical descriptions is related to the postulation that the nature of the space of mathematical reality is Platonic (Penrose 2004, p. 1029).

However, the postulated existence of a Platonic world with abstract mathematical objects still does not explain why the Platonic world could be mapped onto the physical world via the power of human mental activity, nor how mindless physical entities could have this orderly behaviour (Frederick 2013). Philosopher Roger Trigg (1993, pp. 186-187) observes that mathematical theories can exist but still not be about anything. And Stephen Hawking (1988, p. 126) asked: ‘even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe?’

Cosmologist Max Tegmark (2008) has replied with a radical proposal that our physical universe is equivalent to an abstract mathematical structure.Footnote 3 This looks like Pythagoreanism reborn, whereby physical objects are somehow reduced to abstract mathematical structures (Dumsday 2019, p. 35). This proposal is related to a view known as Ontic Structural Realism (OSR). Proponents of OSR claim support from our best current theories in physics; for example, they claim that

traditional conceptions of individuality break down at the quantum level, such that the notion of particles as fundamental individual ‘objects’ with intrinsic identities should be abandoned in favour of fundamental relations; that the metaphysics of quantum field theory is best interpreted along structuralist lines, insofar as symmetries are best seen there as ontologically prior to fields; that the metaphysics of quantum gravity is best interpreted along structuralist lines, since particles are best seen there as deriving their identities from their structural context; that structuralism provides for a superior account of the metaphysics of spacetime; and that structuralism allows for a novel way of defusing the traditional debate over whether matter at the fundamental level is continuous or discrete, and, relatedly, provides a plausible way of reconceiving wave-particle duality. (Dumsday 2019, pp. 27-29)

However, not all proponents of OSR defend eliminativist OSR, which claims that, at the fundamental level, relations exist but objects either do not exist (‘There are no things. Structure is all there is’, Ladyman et al. 2007, p. 131)Footnote 4 or exist but are nothing over and above their place/function in the relational structure, which is ontologically prior to them and is the bearer of any properties. There are more moderate versions of OSR which claim that objects and relations are symmetrically dependent with no ontological priority obtaining between them, and that there are objects which have ‘an intrinsic identity defined partly in terms of the possession of intrinsic properties and partly in terms of their place/function in the structure. As such, their identity is not wholly reducible to their structural role, yet they cannot exist independently of the structure’ (ibid., p. 30). These more moderate versions of OSR are compatible with the arguments I defend in this book.

On the one hand, there is no conclusive argument that compels the acceptance of the eliminativist version of OSR over the moderate version because, as Ladyman et al. (2007, p. 9) themselves observe, ‘science, usually and perhaps always, underdetermines the metaphysical answers we are seeking.’ They admit:

Of course, all the considerations from physics to which we have appealed do not logically compel us to abandon the idea of a world of distinct ontologically subsistent individuals with intrinsic properties. As we noted, the identity and individuality of quantum particles could be grounded in each having a primitive thisness, and the same could be true of spacetime points. (p. 154)

On the other hand, the more moderate versions ‘allow for the option of explaining the concretization of structure by reference to the concretization of its component objects, since on these versions of OSR the latter have at least some intrinsic identity conditions of their own, which could perhaps include whatever it is that provides for concretization’ (ibid., p. 36). Thus, objects are not ‘purely speculative philosophical toys’ (cf. Ladyman et al. 2007, p. 154) but explain concretization.

Moreover, the eliminativist OSR of Tegmark collapses the distinction between the abstract and the concrete physical; this is metaphysically dubious, since, unlike physical entities, abstract entities do not have causal powers. Hence, Tegmark’s proposal that our universe is an abstract mathematical structure still does not explain how the entities in our universe could causally interact in the orderly way noted above. Ladyman et al., who affirm eliminativist OSR, say, ‘What makes the structure physical and not mathematical? That is a question that we refuse to answer. In our view, there is nothing more to be said about this that doesn’t amount to empty words’ (Ladyman et al. 2007, p. 158), claiming that standard methods of distinguishing the concrete from the abstract by appealing to causal efficacy are unworkable for fundamental physics (pp. 159–161). However, as I have argued in Chaps. 2 and 3, causation is necessary for and compatible with fundamental physics, what grounds one event (change) following another (i.e. what grounds their relation) are causal properties, and the Modus Tollens argument for the Causal Principle demonstrates that events do not begin uncaused. While a naturalist Platonist might be able to explain the permanence of mathematical truths by appealing to timeless abstract objects, abstract objects by themselves cannot explain why physical entities follow complicated mathematical truths, since abstract objects have no causal power to make physical entities behave in such a way.

Why then are the events in our physical universe like this? Why is it the case that the sequence of events can be described by mathematical equations which indicate a high degree of ordering? How could unthinking mindless physical entities and forces have such an orderly behaviour? As Danny Frederick asks, ‘What is to stop some bits of matter moving in ways which are inconsistent with natural laws; or the same piece of matter moving at one time in a way which accords with natural laws but at another time in a way which is inconsistent with them?’ (Frederick 2013, p. 271).

Frederick argues that, while natural laws may be regarded as ceteris paribus rather than exceptionless laws (i.e. they may be default regularities that hold in the absence of outside interference), and while natural laws should be understood as descriptions of what is happening rather than rules for natural objects to follow, nevertheless the question still remains as to how the events in the universe could happen in such a manner describable by natural laws. He notes that statements of natural law are modal descriptions rather than mere descriptions: unlike mere descriptions, modal descriptions describe the limits to what can happen and can be used for prediction. He also observes that it would not help to point out that microphysics shows that the fundamental laws of nature are statistical, for one could then ask how the changes of unthinking physical entities could so arrange themselves over time as to exhibit a probability distribution (ibid.).

The pressing question, therefore, remains: The universe does not have to be like this, but why is it like this? Throughout history, a number of eminent scientists have come to the conclusion that the most plausible explanation is that the universe is the work of a Supreme Intelligent Mind who imposed a rational order onto the mindless physical entities. For example, Einstein writes:

Certain it is that a conviction, akin to religious feeling, of the rationality or intelligibility of the world lies behind all scientific work of a higher order … This firm belief, a belief bound up with deep feeling, in a superior mind that reveals itself in the world of experience, represents my conception of God.Footnote 5

Paul Dirac, one of the pioneering geniuses of quantum theory and an avowed atheist in his younger days, came to acknowledge the plausibility of a Designer after years of research in physics:

It seems to be one of the fundamental features of nature that fundamental physical laws are described in terms of a mathematical theory of great beauty and power, needing quite a high standard of mathematics for one to understand it … One could perhaps describe the situation by saying that God is a mathematician of a very high order, and He used very advanced mathematics in constructing the universe. (Dirac 1963)

4.2.3 Summary

To sum up the views of the scientists cited above, the following features of our universe have been noted:

1. Fine-tuning

2. The existence of orderly patterns of events which can be described by advanced mathematics (see also the discussion of laws of nature in Chap. 2)

In what follows, we shall examine which hypothesis best explains both of these features. It may be that some hypothesis or combinations of hypotheses can explain (1) but not (2), or (2) but not (1), and therefore fail because what needs to be explained are both of these features taken together.

4.3 A Logically Exhaustive List of Categories of Possibilities

In his writings, Richard Dawkins has repeatedly warned of the danger of jumping to the conclusion of design. He cites as an example the argument from the apparent design of living organisms, which he thinks is a God-of-the-gaps argument (i.e. an argument based on gaps in our existing knowledge). He argues that in the past it was thought that the improbability of a dragonfly’s wing or an eagle’s eye originating by chance implied that these were designed, and that this conclusion resulted from a failure to see the possibility of the alternative explanation of Darwinian evolution. He writes,

After Darwin, we all should feel, deep in our bones, suspicious of the very idea of design. The illusion of design is a trap that has caught us before, and Darwin should have immunized us by raising our consciousness … A full understanding of natural selection encourages us to move boldly into other fields. It arouses our suspicion, in those other fields, of the kind of false alternatives that once, in pre-Darwinian days, beguiled biology. Who, before Darwin, could have guessed that something so apparently designed as a dragonfly’s wing or an eagle’s eye was really the end product of a long sequence of non-random but purely natural causes? (Dawkins 2006, pp. 139, 141)

Dawkins raises an important point. Nevertheless, one should also be careful not to make the fallacious argument that, because many things once thought to be divinely designed actually do have natural explanations, therefore all things have natural explanations. The correct way to proceed is to assess, on a case-by-case basis, which explanation is the best for each case. To assess the case concerning the mathematically describable order of physical entities and to address Dawkins’ concerns, I shall demonstrate that a logically exhaustive list of categories of alternative hypotheses can be devised, and that various objections can be given to rule out each of these categories.

The failure to consider alternative hypotheses is evident in William Dembski’s widely discussed book The Design Inference, in which Dembski attempts to demonstrate that regularity, chance, and design are logically exhaustive and competing modes of explanation. He writes:

Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity, chance, and design. To attribute an event to a regularity is to say that the event will (almost) always happen. To attribute an event to chance is to say that probabilities characterize the occurrence of the event, but are also compatible with some other event happening. To attribute an event to design is to say that it cannot reasonably be referred to either regularity or chance. Defining design as the set-theoretic complement of the disjunction regularity-or-chance guarantees that the three modes of explanation are mutually exclusive and exhaustive. (Dembski 2006, p. 36)

However, Dembski glosses over the possibility that regularity, chance, and design can be combined in various ways, and his subsequent use of his three competing modes of explanation for explaining biological structures has been criticized for ignoring various evolutionary pathways.Footnote 6 Such a pathway has been proposed for cosmology as well (see the discussion of Smolin’s proposal below), and regardless of the merits of this proposal, it is important that this theoretical possibility be considered. Moreover, Dembski fails to consider the option that the event may be ‘Uncaused’, as has been postulated by Hawking for the Big Bang (see Chap. 6). Incomplete considerations of alternative explanations such as Dembski’s serve as a warning that we should be more rigorous in our assessment of alternative explanations with regard to the Teleological Argument. Consider also Monton’s claim that ‘when people observe features of the universe, they sometimes infer that the feature occurred as a result of design, and they sometimes infer that the feature occurred some other way—by chance, necessity, coincidence, unguided natural processes, or what have you’ (Monton 2010, p. 208). The qualifying phrase ‘what have you’ is too slack and does not address the sort of concerns raised by Dawkins.

Various forms of design arguments have been suggested in the literature, for example, significance testing (If E has a low probability and is specified, it is due to intelligent design), inductive sampling, analogical, Bayesian, likelihoodist, and abductive (IBE) (Sober 2019). The problem of unconsidered alternative explanations besets all of them. For example, concerning Bayesianism and Inference to the Best Explanation (IBE), which are widely used by contemporary philosophers, Ratzsch and Koperski (2019) observe,

substantive comparison can only involve known alternatives, which at any point represent a vanishingly small fraction of the possible alternatives. Choosing the best of the known may be the best we can do, but many would insist that without some further suppressed and significant assumptions, being the best (as humans see it) of the (humanly known) restricted group does not warrant ascription of truth, or anything like it.Footnote 7

In response to Craig’s argument that an infinite mind can explain the connections between the abstract, the physical, and the mental, which Penrose admits are mysteries, Penrose replies that he does not see why an infinite mind is the only solution, because there could be other possibilities which we still do not know of and cannot verify. He adds that appealing to God can be used to solve any problem, and so it is not helpful.Footnote 8

Now I am not claiming that the Teleological Argument must be able to eliminate all the other alternative explanations in order to be of any value. To require the elimination of all the possible alternatives may be too demanding a requirement for reasonable belief, since such a criterion is not fulfilled even by rational inferences in the natural sciences or in everyday life (Bird 2005, pp. 26–28). Nevertheless, the concerns noted in the preceding paragraphs indicate that it would be desirable if the argument could be made more rigorous such that all the possible alternatives can indeed be eliminated.

The above concerns can be addressed by devising a logically exhaustive list of possible explanations and by excluding all the alternative categories of explanation, such that the conclusion of design follows logically rather than being merely invoked to solve a problem. Concerning the Teleological Argument defended here, the logically exhaustive list of categories of possibilities is demonstrated by the rigorous use of the Law of Excluded Middle, and is as follows:

1. The fine-tuning and order of the universe is either fundamentally Uncaused, or it is fundamentally due to either 1.1, 1.2, or 1.3Footnote 9:

1.1. random cause(s) (‘Chance’).

1.2. non-random cause(s), in which case either:

1.2.1. it is fundamentally due to non-intelligent, non-random cause(s) (‘Regularity’), or

1.2.2. it is fundamentally due to intelligent, non-random cause(s) (‘Design’).

1.3. a combination of random and non-random causes, in which case either:

1.3.1. it is fundamentally due to a combination of non-intelligent, non-random cause(s) + random cause(s) (‘Combinations of Regularity and Chance’), or

1.3.2. it is fundamentally due to a combination of intelligent, non-random cause(s) + random cause(s) +/− non-intelligent, non-random cause(s) (e.g. Evolutionary Creationism: involves a Designer).

2. The fine-tuning and order of the universe is notFootnote 10 fundamentally due to Chance, Regularity, or Combinations of Regularity and Chance, and it is not fundamentally Uncaused.

3. Therefore, the fine-tuning and order of the universe is fundamentally due to Design.
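The exhaustiveness of this partition can be checked mechanically. The following minimal sketch (my own schematic formalization in Python; the function and flag names are invented for illustration) enumerates every yes/no combination sanctioned by the Law of Excluded Middle and shows that each coherent combination falls into exactly one of the five categories:

```python
from itertools import product

# Three yes/no questions, each settled by the Law of Excluded Middle:
#   caused   -- is the feature caused at all?
#   random_c -- does the causal story include random cause(s)?
#   nonint_c -- does it include non-intelligent, non-random cause(s)?
#   intel_c  -- does it include intelligent, non-random cause(s)?

def classify(caused, random_c, nonint_c, intel_c):
    if not caused:
        return "Uncaused"
    if intel_c:
        return "Design"                 # covers 1.2.2 and the mixed case 1.3.2
    if random_c and nonint_c:
        return "Regularity + Chance"    # 1.3.1
    if random_c:
        return "Chance"                 # 1.1
    return "Regularity"                 # 1.2.1

for caused, r, n, i in product([False, True], repeat=4):
    if not caused and (r or n or i):
        continue   # cause-type flags are moot for an uncaused feature
    if caused and not (r or n or i):
        continue   # a caused feature has at least one kind of cause
    print((caused, r, n, i), "->", classify(caused, r, n, i))
```

Running the sketch prints every coherent profile alongside its category; no profile is left unclassified and none receives two labels.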

It should be noted that my argument by exclusion does not require ‘perfect’ elimination (‘rule out’) understood as demonstrating that other possible hypotheses have zero probability. It only requires showing that their probability is so low that they can be eliminated as reasonable alternatives to Design even if we assign them very generous probability estimates (see Sect. 7.5), and this is how the ‘not’ in the above syllogism should be understood.

From the above syllogism, it can be seen that all possible hypotheses belong to the following categories: (i) Chance, (ii) Regularity, (iii) Combination of Regularity and Chance (e.g. natural selection + random variation, as in the case of naturalistic evolution), (iv) Uncaused, and (v) Design (the Designer may or may not have used processes such as evolution).

Although each of these categories has been discussed before in the literature, a logical demonstration that these are the only possible categories of hypotheses has not been published before, despite the huge amount of literature on the Teleological Argument over the centuries, hence the unique contribution of this book. It should be noted that such a list can be used for other types of Teleological Argument with respect to other cases of apparent design as well, by simply replacing ‘the existence of mathematically describable order and fine-tuning of the universe’ with other features of apparent design in question. Because of its utility, this list contributes to the discussion of the Teleological Argument in general.

One might raise the worry that new, previously unconsidered hypotheses could all be lumped together in the catch-all basket, and that ‘without knowing the details of what specific unconsidered hypotheses might look like, there is simply no plausible way to anticipate the apparent likelihood of a novel new hypothesis’ (Ratzsch and Koperski 2019). In reply, I shall show that there is an essential feature of each of the categories alternative to design which renders it unworkable as an ultimate explanation for the fine-tuning and order of the universe. As noted earlier, these alternative categories are (i) Chance, (ii) Regularity, (iii) Combination of Regularity and Chance (e.g. natural selection + random variation, as in the case of naturalistic evolution), and (iv) Uncaused. Because the terms chance, random, and the related notion of probability have multiple meanings, I shall first clarify my usage of these terms before evaluating the alternative categories in turn.

Broadly speaking, there are two main concepts of probability: (1) an epistemic notion and (2) a non-epistemic notion, better known as physical probability (Eagle 2019).

(1) The epistemic notion of probability can be further subdivided into objective and subjective interpretations (Holder 2004, p. 74):

(1.1) Objective interpretation of the epistemic notion of probability (this includes 1.1.1 classical and 1.1.2 logical/evidential probability). This refers to objective evidential support relations (e.g. ‘in light of the relevant seismological and geological data, California will probably experience a major earthquake this decade’) (Hájek 2019). It measures the extent to which the evidence is entailed by the hypothesis (Holder 2004, p. 74, citing Swinburne).

(1.1.1) The classical interpretation ‘assigns probabilities in the absence of any evidence, or in the presence of symmetrically balanced evidence. The guiding idea is that in such circumstances, probability is shared equally among all the possible outcomes, so that the classical probability of an event is simply the fraction of the total number of possibilities in which the event occurs … for example, the classical probability of a fair die landing with an even number showing up is 3/6’ (Hájek 2019).

A related notion is the Principle of Indifference, which Collins (2009, p. 234) states as follows:

When we have no reason to prefer any one value of a variable p over another in some range R, we should assign equal epistemic probabilities to equal ranges of p that are in R, given that p constitutes a ‘natural variable.’ A variable is defined as ‘natural’ if it occurs within the simplest formulation of the relevant area of physics.

Applying the principle to the argument from Fine-tuning, Collins (2009, p. 234) writes:

Since the constants of physics used in the fine-tuning argument typically occur within the simplest formulation of the relevant physics, the constants themselves are natural variables. Thus, the restricted Principle of Indifference entails that we should assign epistemic probability in proportion to the width of the range of the constant we are considering.

The epistemic probability is argued to be very small because, for a fine-tuned constant C, W_r/W_R << 1, where W_r is the width of the life-permitting range of C, and W_R is the width of the set of values for which we can determine whether the values are life-permitting or not (Collins 2009, pp. 244, 252). Likewise, Lewis and Barnes (2016, pp. 286-7) reason that, if all we knew was that a certain universe obeyed the laws of nature, without specifying the values of the constants of nature and the initial conditions, the probability that that universe would contain life forms is extremely small.
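As a schematic illustration (the exponent here is my own placeholder, chosen to be consistent with the 10^50 to 10^123 discrepancy cited in Sect. 4.2.1, rather than a figure from Collins): if the life-permitting width W_r of the cosmological constant is of the order of 10^-120 of the theoretically expected range W_R, the restricted Principle of Indifference yields

```latex
\[
  P(\text{life-permitting}) \;=\; \frac{W_r}{W_R} \;\sim\; 10^{-120} \;\ll\; 1 .
\]
```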

Following Hume, it might be objected that our universe is the only universe of which anyone has had experience, invalidating it as the basis of an inductive inference. However, while this universe is the only one we have experienced, we can still think about how it could have been different. Ratzsch and Koperski (2019) observe:

If we let C stand for a fine-tuned parameter with possible values in the range [0, x], and if we assume that nature is not biased toward one value of C rather than another such that each unit subinterval in this range should be assigned equal probability, then fine-tuning is surprising insofar as the life-permitting range of C is tiny compared to the full interval, which corresponds to a very small probability.

Critics accuse the Principle of Indifference of extracting information from ignorance, and argue that in a state of ignorance, it is better to assign imprecise probabilities or to eschew the assignment of probabilities altogether (Hájek 2019).

In reply, concerning the problem of assigning prior probability of the constants and initial conditions of a given theory (e.g. the probability of a constant having a value in a certain small range, without any knowledge about our universe), Lewis and Barnes (2016) note that ‘we cannot calculate the posterior at all without some estimate of the prior probability’ (p. 287). However, this is not a big problem because ‘if our data are very good, then our conclusions won’t depend much on the prior probability’ (p. 288). In fine-tuning cases, ‘the speed and severity with which disaster strikes as one tiptoes through parameter space show that the probability of a life-permitting universe, given the laws but not the constants, will be very small for any honest (and non-fine-tuned!) prior probability’ (ibid.).

In other words, if there are some factors of which we are ignorant which entail that the probability is not small (a concern raised in Hossenfelder 2019), those factors would themselves need to be ‘fine-tuned’.

(1.1.2) Logical theories of probability allow for the possibilities to be assigned unequal probabilities depending on the evidence (Hájek 2019). While the best beliefs to have are those that are logically probable on our rightly basic beliefs, to the extent that an investigator’s standards are close to the correct ones, he/she will use rightly basic beliefs and logical probability (Holder 2004, pp. 75–76).

(1.2) Subjective interpretation of the epistemic notion of probability (subjective probability). This refers to an agent’s degree of confidence, a graded belief (e.g. ‘I am not sure that it will rain in Canberra this week, but it probably will’) (Hájek 2019).

(2) Non-epistemic notion of probability, also known as physical probability (this includes the frequentist, propensity, and best-system interpretations): this applies to various systems in the world, independently of what anyone thinks (Hájek 2019). The frequentist interpretation relates to the outcome of many trials of an experiment, such as many tosses of a fair coin (Holder 2004, p. 73). The propensity interpretation, by contrast, refers to

the extent to which one or more events cause another event. The outcome of my toss of a coin may be determined completely by the impulse I impart to it, the angle at which my thumb strikes it, the atmospheric conditions at the time, and so on; and so the coin may have a physical probability of 1 of landing heads on a particular toss. Indeed, if determinism were true all physical probabilities would be 0 or 1. Most physicists, however, believe that quantum theory is ontologically indeterminate and so the physical probability of a quantum event, such as the radioactive decay of an atom within a certain time, has a physical probability between 0 and 1. (Ibid.)

An example of the best system interpretation is ‘the Mentaculus’, which attempts to provide a complete probability map of the universe (see Chap. 2).

Evaluation of different interpretations of probability:

As noted above, there are different interpretations of probability which are suited for different contexts of discussions. Which of the above interpretations is suitable for discussing the probabilities of the hypotheses concerning the fine-tuning and order of the universe in the context of the argumentation of this book?

The non-epistemic notion of probability (physical probability) is not appropriate, because according to the standard view of physical possibility, ‘alternative physical laws and constants trivially have physical probability zero, whereas the actual laws and constants have physical probability one’ (Friederich 2018).Footnote 11

The subjective epistemic notion is also not appropriate, because the arguments in this book do not concern the psychological state of any particular individual, but the state of the universe.

Therefore, an objective epistemic notion of probability is the only appropriate one for the purposes of this book. I will be using both the classical interpretation and the logical/evidential interpretation where appropriate. In particular, by arguing that there are essential properties of each of the hypotheses alternative to design which render it unlikely, I will be attempting to construct logical probabilities concerning each such hypothesis and to show that its probability is low on the basis of the evidence.

Broadly speaking, there are two main concepts of ‘random’:

(1) An epistemic notion: referring to those processes whose outcomes we cannot know in advance, that is, unpredictable (Eagle 2019).

(2) A non-epistemic notion: the non-epistemic notion may be subdivided as follows:

(2.1) A non-epistemic notion used to characterize the disorder and patternlessness of an entire collection of outcomes of a given repeated process. On Eagle’s (2019) conception,

randomness indicates a lack of pattern or repetition … randomness is fundamentally a product notion, applying in the first instance to sequences of outcomes, while chance is a process notion, applying in the single case to the process or chance setup which produces a token outcome … randomness is indifferent to history, while chance is not. Chance is history-dependent.

On the basis of this conception, he argues that there are counterexamples to the Commonplace Thesis (CT) ‘Something is random iff it happens by chance.’ One interesting potential counterexample involves coin tossing. ‘Some have maintained that coin tossing is a deterministic process, and as such entirely without chances, and yet which produces outcome sequences we have been taking as paradigm of random sequences’ (ibid.); a toy illustration of such a deterministic yet random-looking process is sketched below, after the discussion of indeterminism. Eagle (2019) also argues that it is possible for a chancy and indeterministic process to produce a non-random sequence of outcomes.

(2.2) A non-epistemic notion used to characterize a process. Eagle (2019) notes that some philosophers deliberately use ‘random’ to mean ‘chancy’ and acknowledges that this process conception of randomness is perfectly legitimate, but complains that it makes the Commonplace Thesis a triviality and does not cover all cases of randomness.

Eagle notes that some have defined randomness as indeterminism, but this view

makes it difficult to understand many of the uses of randomness in science … This view entails that random sampling, and random outcomes in chaotic dynamics, and random mating in population genetics, etc., are not in fact random if determinism is true, despite the plausibility of their being so. It does not apparently require fundamental indeterminism to have a randomized trial, and our confidence in the deliverances of such trials does not depend on our confidence that the trial design involved radioactive decay or some other fundamentally indeterministic process. Indeed, if Bohmians or Everettians are right (an open epistemic possibility), and quantum mechanics is deterministic, the view that randomness is indeterminism entails that nothing is actually random, not even the most intuitively compelling cases. (Ibid.)

Hence, Eagle concludes that the view that randomness is indeterminism should be rejected (ibid.).
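Eagle’s coin-tossing counterexample can be made vivid with a toy example of my own (the logistic map, a standard case of deterministic chaos; it is not an example from Eagle’s article): a process with no chance in it at all generates a sequence that looks paradigmatically random.

```python
# Logistic map: a fully deterministic rule whose orbit looks patternless.
# x_{n+1} = 4 * x_n * (1 - x_n) on [0, 1] is chaotic: no chance is involved,
# yet thresholding the orbit yields a sequence that passes as 'random'.

def logistic_orbit(x0, n):
    """Iterate the deterministic map n times from seed x0."""
    xs = []
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

# Convert the orbit to a bit sequence: 1 if above 0.5, else 0.
bits = [1 if x > 0.5 else 0 for x in logistic_orbit(0.2024, 60)]
print("".join(map(str, bits)))  # looks like coin flips, but is fixed by x0
```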

The term ‘chance’ also has a variety of meanings:

(1) Epistemic notion:

(1.1) Synonymous with an epistemic notion of random, that is, unpredictable. ‘Something that happens unpredictably without discernible human intention or observable cause, e.g. “Which cards you are dealt is simply a matter of chance”’ (Merriam-Webster Dictionary, definition 1a)

(1.2) Synonymous with an epistemic notion of probability. ‘The possibility of a particular outcome in an uncertain situation … the degree of likelihood of such an outcome e.g. a small chance of success’ (Merriam-Webster Dictionary, definition 4)

(2) Non-epistemic notion: chance is often used synonymously with physical probability (Eagle 2019). It is also used for the juxtaposition of unrelated causal trajectories (e.g. car crashes, when two people meet by accident) (Ellis 2018).

Evaluation of different interpretations of ‘random’ and ‘chance’:

As noted above, there are different interpretations of ‘random’ and ‘chance’ which are suited for different contexts of discussion. Which of the above interpretations is suitable for the use of these terms in my syllogism demonstrating the logically exhaustive list of categories of possibilities, as explained above?

The epistemic notion is not appropriate: the syllogism is not referring to what we can predict, but to what is the case. The definition of randomness as indeterminism is also inappropriate, for the reasons Eagle explained (see above). Rather, by using the term ‘random causes’ in my syllogism and labelling this the ‘Chance hypothesis’, I intend to represent a common usage in the scientific literature relevant to certain forms of hypotheses which have been postulated as possible explanations for ‘fine-tuning’, such as inflationary cosmology and multiverse scenarios. For example, cosmologist Andreas Albrecht writes,

One typically imagines some sort of chaotic primordial state, where the inflation field is more or less randomly tossed about, until by sheer chance it winds up in a very rare fluctuation that produces a potential-dominated state … Inflation is best thought of as the ‘dominant channel’ from random chaos into a big bang-like state. (Albrecht 2004, pp. 384-5; italics mine)

The above description by Albrecht uses the terms ‘random’ and ‘chance’ in a non-epistemic sense to characterize something that brought about (i.e. caused) a fluctuation resulting in a big bang-like state. In other words, ‘random’ and ‘chance’ are used in a non-epistemic sense to describe causes that bring about a variety of outcomes with varying degrees of order and/or specificity. This definition of ‘random’ and ‘chance’ is compatible with both determinism and indeterminism: if determinism is true, the varying outcomes are determined by the varying conditions of the cause(s); if indeterminism is true, a cause in the exact same condition may produce different outcomes. To hypothesize that a causal process produced multiple universes such that one that is fine-tuned resulted by chance is analogous to saying that in a game a machine randomly tossed three fair dice multiple times such that this process resulted in the winning ordered combination of ‘triple six’ by chance.
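A minimal simulation of the dice analogy (a sketch of my own; the probability of ‘triple six’ on any one toss of three fair dice is 1/6³ = 1/216):

```python
import random

def toss_three_dice(rng):
    """One 'universe-generating' trial: three fair dice."""
    return [rng.randint(1, 6) for _ in range(3)]

rng = random.Random(0)  # seeded so the run is reproducible
trials = 10_000
hits = sum(toss_three_dice(rng) == [6, 6, 6] for _ in range(trials))

# Expected hit rate is 1/216 (about 0.46%); given enough random trials,
# chance alone eventually produces the 'winning' ordered combination.
print(hits, hits / trials)
```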

By using the terms ‘random’ and ‘chance’ I am not attempting to discuss the Commonplace Thesis nor to cover all cases of randomness and chance, nor am I using the term ‘chance’ as ‘physical probability’ in my syllogism, as Eagle does in his article. Hence, my use of the term ‘Chance hypothesis’ to label ‘random causes’ is not susceptible to Eagle’s objections to the process notion of randomness noted above.

In summary, I am using the term ‘random’ in ‘random causes’ (and labelling this the Chance hypothesis) in a non-epistemic sense to describe causes that bring about a variety of outcomes with varying degrees of order and/or specificity. This contrasts with the ‘Regularity’ hypothesis, whereby causes bring about outcomes that are not varied, and the ‘Design’ hypothesis, whereby causes have the freedom to intentionally bring about, for a purpose, outcomes which may be varied or not varied (cf. Dawes 2007, p. 73, who defines ‘design’ to mean ‘the work of some intentional agent acting purposefully’). To evaluate whether each of the five hypotheses—(i) Chance, (ii) Regularity, (iii) Combination of Regularity and Chance, (iv) Uncaused, and (v) Design—is true on the basis of evidence, I will be using probability in an objective epistemic sense.

I shall now proceed to evaluate the various categories of hypotheses, starting with the Chance hypothesis.

4.4 Chance Hypothesis

4.4.1 The Argument from Selection Bias and Chaos

With regard to the mathematically describable order of our universe, Wenmackers (2016, p. 10) objects that it may just be due to our selection bias, for the majority of possible mathematical variations are not applicable to our world in any way. Moreover, we can never be sure that the application of mathematics to the world is perfect, since empirical precision is always limited. Wenmackers notes the objection that the fact that there is some part of mathematics at all that works well requires explanation, even if this does not constitute all or most of mathematics (pp. 10–11). She replies that a world in which no mathematics described anything and in which processes could not be summarized or approximated in a meaningful way would not have allowed us to evolve in it (Wenmackers 2016, pp. 10–14).

However, the question is, why should our world be such as to allow for evolution? As Einstein argues,

A priori, one should expect a chaotic world, which cannot be grasped by the mind in any way … Even if man proposes the axioms of the theory, the success of such a project presupposes a high degree of ordering of the objective world, and this could not be expected a priori. That is the ‘miracle’ which is being constantly reinforced as our knowledge expands. There lies the weakness of positivists and professional atheists. (Goldman 1997, p. 24)

Wenmackers (2016, p. 13) objects by claiming that

random processes are very well-behaved: they consist of events that may be maximally unpredictable in isolation, but collectively they produce strong regularities. It is no longer a mystery to us how order emerges from chaos. In fact, we have entire fields of mathematics for that, called probability theory and statistics, which are closely related to branches of physics, such as statistical mechanics.
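The sort of collective regularity Wenmackers describes can be illustrated with a short simulation (a sketch of my own): individually unpredictable coin flips converge, in aggregate, on a stable frequency, as the law of large numbers describes.

```python
import random

rng = random.Random(42)  # seeded so the run is reproducible

# Individually unpredictable events: fair coin flips (1 = heads).
flips = [rng.randint(0, 1) for _ in range(100_000)]

# Collectively, a strong regularity emerges: the running frequency
# of heads settles near 0.5 (the law of large numbers).
for n in (10, 100, 1_000, 100_000):
    print(n, sum(flips[:n]) / n)
```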

However, the randomness that she is referring to is epistemic (‘may be maximally unpredictable’). In actuality, the so-called chaos has a high degree of underlying order, which is described by the complex equations formulated by statisticians (Bishop 2017). Likewise, so-called self-organization processes (e.g. crystallization), in which overall order arises from interactions between apparently disordered parts, involve a high degree of underlying order in the interactions themselves. The question posed by Einstein is, why should there be any high degree of ordering at all? (One might reply that the high degree of ordering is explained by another level of ordering; this possibility is discussed under the Regularity hypothesis in Sect. 4.5, and also under the Uncaused hypothesis in Chap. 7.)

Steiner (1998, pp. 24-26) observes that, in order for mathematics to be applicable for predicting observations of physical entities, the properties of physical entities must remain reasonably stable over time. For example, if there are four coins in my pocket, then after removing two coins I should have two coins left; but if the coins were unstable such that they disintegrated very quickly, I would not observe two coins when I check my pocket. ‘The number of coins in my pocket … stay constant long enough for humans to count them … The coins in my pocket are usually the same whether or not I walk around the house, put candies in my pocket, too, and so forth’ (p. 26). What explains this stability over time? Various properties of a particle, for example, could have changed so quickly as to make mathematical prediction impossible. While one might suggest that there could have been various constraints preventing the existence of the alternative disordered schemes, the question remains as to why the constraints should exist in such a well-ordered way as to result in mathematically describable behaviour.

Genuine randomness is extremely improbable as a causal explanation for the order noted above, in view of the fact that one could conceive of a potentially infiniteFootnote 12 number of alternative ways in which the behaviour of mindless physical entities in the universe could be disordered. A particle, for example, could have moved in billionsFootnote 13 of alternative directions at every moment, rather than consistently in a direction describable by any form of mathematical equation. As noted earlier, ‘random causes’ is supposed to describe causes that bring about a variety of outcomes with varying degrees of order and/or specificity, without favouring any one alternative over the others. Thus, following the Principle of Indifference, if the universe was fundamentally brought about by random causes, then each one of the billions of possible ways in which mindless physical entities could behave should be assigned equal probability. This means that the probability of any one of them obtaining—including the behaviour of moving consistently in directions describable by mathematical equations—is extremely low. Against the criticism that the Principle of Indifference extracts information from ignorance, it can be replied that, if there are some factors of which we are ignorant which entail that the probability of mathematically describable order is not small, those factors would themselves need to be ‘fine-tuned’ (i.e. ordered by regularity, regularity and chance, design, or combinations of these; see below); they would not be purely random.
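The reasoning can be made explicit as follows (the numbers are purely illustrative assumptions): if at each moment there are N equally weighted alternative behaviours and only one of them conforms to the mathematically describable pattern, then, by the Principle of Indifference,

\[
P(\text{ordered at one moment}) = \frac{1}{N}, \qquad
P(\text{ordered throughout } t \text{ moments}) = \frac{1}{N^{t}} .
\]

With, say, N = 10^9 and t = 10^3, the probability is 10^-9000, unimaginably smaller than that of winning any lottery.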

Finally, Wenmackers’ argument from selection bias and chaos does not explain the fine-tuning of the universe (nor is it intended to).

4.4.2 Anthropic Principle

With regard to fine-tuning, some scientists deny the conclusion of design by arguing that, if these conditions were not ‘fine-tuned’, we would not be here to observe them; since we are here, we should not be surprised about the fine-tuning.

However, this reply is too superficial. Philosopher John Leslie provides the analogy of a criminal who was dragged before a firing squad of 100 trained marksmen, all of whom missed when the command to fire was given, so that the criminal found himself alive. It would be ridiculous for the criminal to think, ‘since I am still alive, I should not be surprised that all of them missed!’ (Leslie 1982, p. 150). On the contrary, the observation that all the marksmen missed requires an explanation other than chance. Perhaps the 100 marksmen conspired to spare him, or perhaps it was a miracle; in any case, it is unreasonable to attribute his survival to chance.

Sober (2019, p. 73) claims that the fine-tuning case and the firing squad case differ. In fine-tuning, the sequence is as follows: t1: the constants are set; t2: you are alive; t3: you observe that you are alive. In the firing squad case: t1: the firing squad decides; t2: you are alive (just before they fire); t3: you observe that you are alive. Sober claims that, in the case of fine-tuning, if you are alive at t2, the constants must be right at t1, t2, and t3; thus, the probability of your observing at t3 that the constants are right is the same regardless of whether it was God or chance that set the values of the physical constants at t1. In the case of the firing squad, however, your being alive at t2 leaves open what the firing squad decided at t1 to do just after t2; thus, your observing at t3 that you are alive provides evidence about the squad’s decision at t1. Thus, the fact that you are alive at t2 induces an Observation Selection Effect in the fine-tuning case but not in the firing squad case. Nevertheless, this still does not explain why the constants are right at t1. As argued previously, why the constants are right at t1 still requires a reasonable explanation other than chance.

4.4.3 Improbable Event Happens

A sceptic might object that even though the apparent probability of a fine-tuned and ordered universe occurring by chance is outrageously tiny, it still could have happened by chance. After all, improbable events happen all the time. For example, the probability of someone winning a lottery involving thousands of participants is outrageously tiny, but still it happened. The probability of clouds, snowflakes, and so on taking the particular beautiful forms that they do is outrageously tiny and these forms may appear to be designed, but we know that they are the result of natural forces.

In response, the cases cited above are disanalogous to the case concerning order and fine-tuning. In a lottery, all the participants are equally qualified to win. Likewise, among the millions and millions of possible forms which clouds, snowflakes, and so on can take, a large proportion of them are ‘suitably qualified’ to appear beautiful or to take one recognizable pattern or another (hence pareidolia, the common psychological phenomenon of perceiving patterns). By contrast, it is not the case that all, or even a large proportion, of the billions of possible valuesFootnote 14 which (say) those physical constants can take would have ‘qualified’ to allow for life after the Big Bang. On the contrary, the proportion of possible values which would allow for life is extremely small; as explained above, the overwhelming majority of possible values would not allow for any form of life at all—indeed, they would yield universes devoid of structure and pattern. (Lewis and Barnes 2016, p. 164: ‘Particles spend their lives alone, drifting through emptying space, not seeing another particle for trillions of years and even then, just glancing off and returning to the void.’) As explained earlier, an explosion such as the Big Bang would most likely have resulted in disorder and debris, rather than a universe which expands for billions of years and which allows life to originate and survive. As in the scenario of the 100 marksmen who missed the criminal, survival in such circumstances requires an explanation other than chance. Likewise, it is not the case that each possible behaviour of particles among the billions of possible behaviours would have resulted in a consistently mathematically describable order. On the contrary, as explained earlier, the proportion of such possible behaviours is extremely small.
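The disanalogy can be stated schematically (the formulation is illustrative): in a lottery with n tickets, all of them sold,

\[
P(\text{some ticket or other wins}) = \sum_{i=1}^{n} \frac{1}{n} = 1,
\qquad
P(\text{constant is life-permitting}) \approx \frac{|\text{life-permitting interval}|}{|\text{range of possible values}|} \ll 1 .
\]

In the lottery, the ‘target’ (someone or other winning) is certain to be hit; in fine-tuning, the target occupies a vanishingly small fraction of the possibilities.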

The above observations illustrate the fact that we are not just talking about improbable events, but an event which is improbable and has a specificity, that is, a universe that is highly ordered and which has the capacity for allowing the production of functional objects, in particular embodied intelligent life. The idea of specificity can be illustrated by the analogy of an archer who shoots arrows at a wall. After the event,

she could make herself appear to be a skilled archer by simply painting bull’s-eyes around whatever places on the wall an arrow falls. But the pattern thus created would not be a specification; it would be a fabrication. If the bull’s-eye already exists, on the other hand, and she sets out to hit it and succeeds, it represents a specification. (Dawes 2007, p. 71, citing Dembski)

The idea of painting a bull’s-eye around wherever the arrow falls is analogous to whoever happens to be the winner in the lottery case. In this case, any place on the wall has an equal chance of being the bull’s-eye of an arrow shot randomly, just as any participant in the lottery has an equal chance of being the winner. By contrast, it is not the case that any of the possible values of those physical constants allows for life; on the contrary, the vast majority of possible values do not allow for life, and the range of possible values that allow for life is extremely small; a small deviation from the existing values would result in a lifeless universe (thus, the values are highly specified in this sense). To fall within such a small range which (unlike the rest of the range of possible values) allows for life would be analogous to falling within a small region of the wall which (unlike the rest of the wall) has been marked out as the bull’s-eye before the arrow is shot.

Moreover, the features of ‘being highly ordered and allowing for the production of functional objects such as embodied intelligent life’ are ‘special’ because:

(1) Functionality is often associated with design (Ratzsch and Koperski 2019; although, as noted at the end of this section, I do not claim that this type of specified complexity by itself is a reliable criterion for detecting design). To illustrate, if one were to discover in the midst of a jungle a structure which has the capacity for allowing the production of motorcars, one would reasonably conclude that it was designed. The reason is that it is unreasonable to think that the components of this structure were fundamentally brought together and assembled by Chance, Regularity, or Combinations of Regularity and Chance, or that the structure began to exist Uncaused, and (as shown above) the only remaining explanation is Design. It is true that there are also other arrangements of the components of the structure which are very unlikely. Nevertheless, the overwhelming proportion of the possible arrangements of the components (e.g. wiring not attached to the assembly line, door panels not fitting the vehicle frame, etc.) would not allow for the production of anything functional. Therefore, the arrangement of the components which allows for the production of motorcars is ‘special’ and warrants an explanation. Likewise, as implied by the discussion in Sect. 4.1, the overwhelming proportion of possible universes would not allow for the production of functional objects such as living cells. Thus, the fact that our universe allows for the production of living cells warrants an explanation.

It might be objected that, unlike the structure (a factory, say) which allows for the production of motorcars, our universe does not seem to be organized towards producing life; indeed, most parts of our universe are inhospitable to life, and hence are not specified or functional in the same sense as the components of the structure. From another direction, Carroll objects that our universe is too fine-tuned for life. He writes,

If the reason why certain characteristics of the universe seem fine-tuned is because life needs to exist, we would expect them to be sufficiently tuned to allow for life, but there’s no reason for them to be much more tuned than that. The entropy of the universe, for example [seems] much more tuned than is necessary for life to exist …. [F]rom purely anthropic considerations, there is no reason at all for God to have made it that small. (Carroll 2016, p. 311)

I shall discuss the objection concerning inhospitality towards life in greater detail in Sect. 7.3. At this point I would like to highlight the fact that, while it is true that our universe is not fully analogous with the factory-like structure, there is nevertheless a point of analogy, namely, just as the overwhelming proportion of the possible arrangements of the components would not allow for the production of anything functional, the overwhelming proportion of possible universes would not allow for the existence of functional objects such as living cells. The relevant sense of specificity is that in both cases the extremely narrow range of possibilities that allow for the existence of functionality is somehow actualized.

Contrary to Carroll, this relevant sense of specificity does not require the fine-tuning to be solely for the existence of life; it may be for the existence of life together with other features, such as certain aesthetic features of our universe. Hence, Carroll’s objection is based on a mistaken assumption. Barnes (2019) replies that

low entropy initial conditions over the observable universe (as opposed to merely in our Solar System, for example) are necessary for our beautiful night sky, from what we see with our naked eye to our biggest telescopes. On a clear night, far away from city lights, try staring deeply into the Milky Way for a while and see if you’re compelled to shout, ‘not worth it!’

(2) Embodied intelligent living things can have plenty of meaningful physical interactions with one another and can be aware of God and can ‘communicate and establish a deep relation of love with God, if God exists at all … Intelligent life can actualize moral values in the world’ (Chan and Chan 2020, p. 8).Footnote 15 Thus, if a good God exists, ‘God would have good reason to create intelligent lives (as well as a universe in which intelligent lives can emerge and flourish)’ (ibid.).

Sinhababu (2016) offers an objection to the fine-tuning argument for God’s existence by suggesting the metaphysical possibility of alternative psychophysical laws that permit a wider range of physical entities to have minds, such that ‘Whenever two electrons were a prime number of centimeters apart, they could have the mental states involved in heartfelt communication about their histories. Every subsequent time they were a whole number of meters apart, they could fondly remember each other’ (p. 425). He argues that such psychophysical laws are possible if a non-physical God having a Mind is possible (pp. 426–427).

However, the point remains that, if the universe were not fine-tuned, it would be deprived of physical interactions, with particles ‘drifting through emptying space, not seeing another particle for trillions of years and even then, just glancing off and returning to the void’ (Lewis and Barnes 2016, p. 164). While God could create alternative psychophysical laws or disembodied intelligent beings (e.g. angels), that still does not answer the question: why is our physical universe so special, that is, so highly ordered and allowing for so many physical interactions? In a similar vein, Hawthorne and Isaacs (2018, pp. 147–148) respond to the objection that there is no special expectation that God would make physical life rather than non-physical life by arguing that this objection does not actually make much of a difference to the fine-tuning argument, because the fact is that there is physical life, and this is more likely given theism than atheism.

Accepting the conclusion that specified events with extremely low probability happened as a result of chance is unreasonable. Are we seriously going to believe that the 100 marksmen missed by chance? Consider also a case of suspected plagiarism in which two essays submitted to a professor by two different students are word-for-word identical. It is very improbable that such ‘specified’ events happen by chance. While there are other arrangements of the words of the essays which are also very unlikely, the overwhelming proportion of the possible arrangements of the words would result in essays that are not identical, rather than two essays that are word-for-word identical. Hence, most professors would rightly insist on investigating for plagiarism.Footnote 16 Yet the improbability of a highly ordered and life-permitting universe is far greater than in these examples! While we can imagine that specified events with extremely low probability (e.g. the case of suspected plagiarism) happened as a result of chance, we should regard such scenarios as belonging only to the imagination and not to reality.
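A back-of-the-envelope calculation conveys the scale (the figures are illustrative assumptions: word-by-word independence, an essay length of 2000 words, and a deliberately generous probability of one-half that any two independently chosen words coincide):

\[
P(\text{two independent essays match word for word}) \leq \left(\tfrac{1}{2}\right)^{2000} \approx 10^{-602} .
\]

Since the realistic per-word matching probability is far below one-half, the true figure is smaller still; and, as noted above, the improbability involved in a highly ordered and life-permitting universe is greater by many further orders of magnitude.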

It should be noted that, while my argument here makes use of ‘specified complexity’ to argue against the Chance hypothesis, I do not claim (as Dembski does) that specified complexity by itself is a reliable criterion for detecting design (Dembski 2002, p. 24). One of the main criticisms of Dembski’s use of specified complexity is that counterexamples from evolutionary biology can be found. However, my book does not rely on specified complexity in that way. Indeed, I think that specified complexity by itself is not a reliable criterion for detecting design, because additional arguments need to be provided to rule out other alternatives to design (such as the evolutionary alternative; see below), and I provide such arguments in what follows. Thus, my book avoids the criticism levelled against Dembski.

4.4.4 The Problem of Normalizing Probabilities

Against conceptual probability, it has been objected that, from a logical point of view, the full interval of the possible values of the fine-tuned parameter is from 0 to ∞, and since the range is infinite, there is no sense in which life-friendly universes are improbable; the probabilities are mathematically undefined (McGrew, McGrew, and Vestrup 2001).

Lewis and Barnes (2016, p. 286) reply that ‘these kinds of “what to do with infinity” problems are often encountered in the physical sciences, especially in cosmology, and so these objections cannot succeed against fine-tuning without paralyzing probabilistic reasoning in all of physics’. Ratzsch and Koperski (2019) propose:

One solution to this problem is to truncate the interval of possible values. Instead of allowing C to range from [0, ∞), one could form a finite interval [0, N], where N is very large relative to the life-permitting range of C. A probability distribution could then be defined over the truncated range … The argument for fine-tuning can thus be recast such that almost all values of C are outside of the life-permitting range. The fact that our universe is life-permitting is therefore in need of explanation.Footnote 17
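This proposal can be stated schematically (the uniform distribution is an assumption of the illustration): if the constant C is uniformly distributed over the truncated range [0, N] and the life-permitting interval has width ε, then

\[
P(C \in \text{life-permitting interval}) = \frac{\varepsilon}{N} \ll 1 \quad \text{whenever } N \gg \varepsilon .
\]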

It should be noted that the fine-tuning argument concerns the concrete universe, not abstract logically possible worlds. Collins (2009, p. 249) argues that, where our concrete physical universe is concerned, the range of the possible values of the fine-tuned parameter is not infinite, noting that ‘the so-called Planck scale is often assumed to be the cutoff for the applicability of the strong, weak, and electromagnetic forces’ (see also the argument against concrete infinities in Loke (2012b; 2017a, chapter 2)). Therefore, ‘the limits of our current theories are most likely finite but very large, since we know that our physics does work for an enormously wide range of energies. Accordingly, if the life-permitting range for a constant is very small in comparison, then … that there will be fine-tuning’ (Collins 2009, p. 249).

4.4.5 Multiple Universes

4.4.5.1 Introducing Various Types of Multiverse Hypothesis

Many scientists have suggested that perhaps many universes have been formed, such that eventually one that is fine-tuned would arise by chance. Collins (2009, p. 257) explains: ‘Just as in a lottery in which all the tickets are sold, one is bound to be the winning number, so given a varied enough set of universes with regard to some life-permitting feature F, it is no longer surprising that there exists a universe somewhere that has F.’ The multiverse hypothesis is often combined with the anthropic principle to suggest that, given a large variety of universes, ‘it is neither surprising that there is at least one universe that is hospitable to life nor—since we could not have found ourselves in a life-hostile universe—that we find ourselves in a life-friendly one’ (Friederich 2018). Some have used the concept of infinity to postulate a spatially infinite universe or an infinite number of universes, given which anything that is possible would happen. Somewhere in such an infinite universe/infinite number of universes, there would be regions exhibiting some degree of order, and since life cannot exist where there is no order, we would find ourselves in one of those regions with order.

There are different types of multiverse theories: some postulate the simultaneous existence of many universes (spatial multiverse theories), others postulate one universe arising after another consecutively (temporal multiverse theories) (Gale 1990). Various philosophical postulations and scientific mechanisms have been proposed for the various theories. For example, while most philosophers accept the use of the language of possible worlds as a way to talk about necessity and possibility (modal logic), philosopher David Lewis speculates that all possible worlds exist concretely (modal realism) (Lewis 1986). Hugh Everett’s Many Worlds interpretation of quantum mechanics has also been used to postulate the existence of infinite branches of spacetime (parallel worlds) resulting from quantum splitting; this interpretation of quantum theory has been used by some cosmologists to explain the cosmic coincidences (Holder 2004, pp. 52–53). Many physicists have suggested that the process of inflation resulted in causally isolated spacetime regions (‘island universes’), and that the process is ‘eternal’ in the sense that the formation of island universes never ends, resulting in the production of an infinite number of island universes (Vilenkin and Tegmark 2011, citing Guth 2000).

It should be noted that the postulation of a multiverse per se is not contrary to theism, for it is possible that God created a multiverse (call this the ‘theistic multiverse hypothesis’). Thus, proving the existence of more than one universe per se will not refute theism. However, the use of the multiverse postulation by atheists to explain away God/Designer (i.e. claiming that the fine-tuning and order of our universe can be explained by the multiverse such that there is no need for a designer; call this the ‘atheistic multiverse hypothesis’) is beset with several problems, which I shall explain below.

4.4.5.2 Insufficient Evidence for the Atheistic Multiverse Hypothesis

On the one hand, there is insufficient reason or evidence for thinking that any of the atheistic multiverse scenarios is true. Concerning Lewis’ modal realist hypothesis: by speculating that all possible worlds exist concretely, Lewis is no longer talking about possible worlds as such; rather, he is speculating that the actual world is far more extensive than we thought. In other words, if we found out that his hypothesis is true, ‘we would simply have learned that the actual world is richer than we thought—that it contains all of these island universes’ (Pruss 2009, p. 36, attributing the point to Van Inwagen). However, there is no good evidence that such concrete worlds really exist. As for Everett’s interpretation, it is not proven either; there are other possible alternative deterministic interpretations of quantum physics, such as Bohm’s pilot-wave model (see Chap. 2). Moreover, Everett’s interpretation (according to which every possibility is actual) is beset with the so-called measure problem (see below).Footnote 18

While some evidence for inflationary cosmology (which is claimed to have brought about multiverses) has been proposed, it has been disputed by other cosmologists, and the problem of testing the multiverse hypothesis remains (Friederich 2018). It should be noted that the so-called Eternal Inflation Model explained by Vilenkin and Tegmark (2011) is not eternal in the past without a beginning; rather, it is postulated to be eternal in the future in the sense that it has no end. In fact, Vilenkin (2015) himself argues for an ultimate beginning of the universe, thus accepting premise 2 of Craig’s formulation of the Kalām Cosmological Argument, namely, ‘The Universe began to exist.’ Given that an actual infinite regress of events is impossible (see Chap. 5), the model must still be finite in the past in the sense of having a first event.

Moreover, the claims that ‘In an eternally inflating universe, anything that can happen will happen; in fact, it will happen an infinite number of times’ and that ‘inevitably, an unlimited number of bubbles of all possible types will be formed in the course of eternal inflation’ (Vilenkin and Tegmark 2011) are based on the assumption that the future is an already existing actual infinite rather than a potential infinite. However, this assumption is unproven and is falsified by Mawson’s argument and by other arguments discussed in Chap. 5; thus, the future (if it is indeed infinite) should be regarded as a potential infinite.Footnote 19 Vilenkin and Tegmark (2011) state: ‘that’s how we test any scientific theory: we assume that it’s true, work out the consequences, and discard the theory if the predictions fail to match the observations.’ Mawson’s argument explained below does just that: it shows how the prediction that ‘anything that can happen will happen’ fails to match the observations. The claim that inflation can stretch continuous space indefinitely does not imply that an actual infinite is actually reached. As Ellis et al. (2004, p. 927) note, ‘Future infinite time also is never realized; rather, the situation is that whatever time we reach, there is always more time available’ (see Chap. 5). Indeed, more recently, Tegmark himself has advocated the rejection of the actual infinite because of the so-called measure problem (see Sect. 4.4.5.3 below).

Some purported evidence of multiple universes (e.g. claims of universe collisions leaving behind ‘scars’ on the CMB, which have been disputed by other scientists, as noted in Chap. 2), even if confirmed, would only imply that there is more than one universe; it would not imply that there is an infinite number, or even a large number, of them. It should be noted that, in order for the multiverse hypothesis to explain the fine-tuning and order of our universe, a huge number of varied universes would be required, but there is no conclusive evidence that such a huge number of varied universes exist. The evidence for inflation does not by itself constitute evidence for an actual infinite number of universes, as illustrated by cosmologist George Ellis’ (2007, Sect. 2.8) acceptance of the former but rejection of the latter (see below).

4.4.5.3 Arguments against the Atheistic Multiverse Hypothesis

On the other hand, there are powerful scientific and philosophical objections against the atheistic multiverse hypothesis.

First, currently popular ‘multiverse’ scenarios which suggest the formation of baby universes that eventually become causally independent of the mother universe are contrary to the Generalized Second Law of Thermodynamics (Curiel 2019, citing Wall 2013a, 2013b).

Second, Ellis (2007, Sect. 9.3.2) observes that ‘the concept of infinity is used with gay abandon in some multiverse discussions’, without any concern for the philosophical problems it raises. Recall the discussion on the multiverse mentioned earlier, whereby some have postulated an actual infinite number (or a very large number) of universes to explain the fine-tuning of the universe. Following philosopher Tim Mawson, one can object that, on such a hypothesis, in which every possibility (or a very large number of possibilities) is actual, the probability of our inhabiting a universe which we can more or less continually and consistently understand through induction is infinitely (or extremely) small. The reason is that at every moment there would be (roughly speaking) an infinite (or very large) number of ways in which things could ‘go wrong’ with respect to our beliefs arrived at by induction, and only one way in which things could ‘go right’.Footnote 20 Yet the mathematically describable order of our universe indicates that our universe is one which we can more or less continually and consistently understand through induction.

One might reply that the probability of such a universe is indeed infinitely (or extremely) small, but that, because an ordered universe is necessary for the survival of life, we would still find ourselves in such a universe due to the anthropic principle. However, the survival of life would only require us to have lived in an ordered universe up to the present moment. There are an infinite number of ways the next moment might go wrong. But as I am typing this, the next moment has arrived, and things have gone right in spite of the infinitesimally small probability of this happening if there were an infinite number of universes. Thus, it is far more likely that there is not an infinite/large number of universes. As Holder (2004, p. 126) notes regarding the problem concerning the persistence of order in this universe,

presumably in an infinite ensemble of possible universes, many will be identical to ours up to, say, the present moment or midnight on 31 October 2008, and then dissolve into chaos … imagine a monkey sitting at a typewriter for untold aeons. The animal is vastly more likely to produce ‘To be or not to be’ at some stage and then sink into chaos than to produce the whole of Hamlet. Similarly, random selection of universes from a vast ensemble is far more likely to produce a solar system embedded in chaos, or a finely-tuned epoch followed by chaos, than a universe with the order, and persistence of that order, which our universe actually possesses.
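Holder’s monkey illustration can be quantified with a rough calculation (the 27-key typewriter and the character count of Hamlet are assumptions for illustration only):

import math

KEYS = 27                                  # assumed typewriter: 26 letters plus a space bar
phrase_length = len("to be or not to be")  # 18 characters
hamlet_length = 130_000                    # rough assumed character count of Hamlet

# Log-probability of typing each target in a single attempt, key by key
log10_p_phrase = -phrase_length * math.log10(KEYS)
log10_p_hamlet = -hamlet_length * math.log10(KEYS)

print(f"log10 P(phrase) = {log10_p_phrase:.1f}")  # about -25.8
print(f"log10 P(Hamlet) = {log10_p_hamlet:.1f}")  # about -186,000

Producing the short phrase and then lapsing into gibberish is thus more probable than producing the whole play by a factor of roughly 10^186,000; analogously, a random selection of universes is overwhelmingly more likely to yield a brief episode of order followed by chaos than the persistent order our universe actually displays.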

Indeed, more recently, cosmologist Max Tegmark (who had earlier endorsed an actual infinite eternal universe scenario, as noted above) has advocated the rejection of the infinite because of the so-called measure problem, which he calls ‘the greatest crisis facing modern physics’. The problem is that, if inflationary cosmology were to result in an actual infinite number of universes, then ‘whatever experiment one makes … there will be infinitely many copies of you … obtaining each physically possible outcome … So, strictly speaking, we physicists can no longer predict anything at all!’ (Tegmark 2015). However, we do live in a universe in which physicists can predict many events. Therefore, the antecedent is false.

Third, the atheistic multiverse scenario faces the Boltzmann Brain problem. Collins explains,

This is the problem that, under naturalistic views of the mind, it is enormously more likely—on the order of 10^(10^123) times more likely—for observers to exist in the smallest bubble of order required for observers, than in a universe that is ordered throughout. (The order being referred to here is measured by entropy—the lower the entropy, the higher the order.) Yet, we do not exist in a bubble of low entropy, but in a universe with low entropy throughout. (Collins 2018, pp. 90–91)

Craig (2012) notes that ‘appeal to an observer self-selection effect accomplishes nothing because … most observable worlds will be Boltzmann Brain worlds’.

In other words,

1. If the atheist multiverse scenario is true, it is overwhelmingly probable that we would observe that we are isolated brains surrounded by thermal equilibrium. (Prediction)

2. We do not observe that we are isolated brains surrounded by thermal equilibrium.

3. Therefore, it is overwhelmingly probable that the atheist multiverse scenario is false. (Adapted from Lewis and Barnes 2016, pp. 317–318)

Lewis and Barnes (2016, p. 322) note: ‘The multiverse has a tightrope to walk. Too few varied universes, and it will probably fail to make a life-permitting one at all. Too many non-fine-tuned universes, on the other hand, could result in a universe filled with Boltzmann Brains.’ For the multiverse to walk this tightrope, it would need to be fine-tuned (ibid.). In other words, those life-permitting multiverse scenarios which are supposedly able to avoid the Boltzmann Brain problem would themselves require fine-tuning, and therefore they are not (by themselves) the ultimate solution to the fine-tuning problem.

Fourth, even if there are many universes, the process which led to their formation (whether involving string theory or not; see Sect. 4.5) would itself require fine-tuning in order to stably generate so many different kinds of universes (and ensure that they do not face other problems such as colliding and destroying one another), such that eventually one that is ‘fine-tuned’ (and describable by highly sophisticated mathematical equations) is generated by chance. As Collins (2018, p. 90) notes, ‘anything that produces such a multiverse itself appears to require significant fine-tuning.’

As an illustration, consider the ‘famous fine-tuning problem of inflation’. Lewis and Barnes (2016, pp. 172–173, citing Neil Turok) explain that, in order for any form of life to exist in our universe, the universe must have a very specific amount of lumpiness: a Q value between one part in 1,000,000 and one part in 10,000. However, ‘inflation can produce practically any value of Q, from zero to very large values. If Q is greater than one, the universe comes pre-loaded with black holes; this really is not a good idea. The properties of the inflaton must be fine-tuned to produce the right value of Q, so again we replace one fine-tuning with another.’ As Holder (2004, p. 136) observes, ‘the fine-tuning required by inflationary models is a serious drawback since inflation was meant to explain fine-tuning!’
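As a rough illustration of the narrowness involved (assuming, very conservatively, that the possible values of Q are confined to the interval from 0 to 1, even though inflation can produce far larger values), the life-permitting window occupies only about one part in ten thousand of that interval:

\[
\frac{10^{-4} - 10^{-6}}{1 - 0} \approx 10^{-4} .
\]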

Finally, even if there are many universes, there must still be a divine First Cause, as shown by the arguments presented in Chaps. 5 and 6.

4.5 Regularity

It has been suggested that there could be fundamental general principles in nature which determined the laws and constants of physics of our universe (Einstein 1949, p. 63).

For example, Bird (2007, p. 212) suggests: ‘If the law of gravitation is not fundamental but is derived from deeper laws (as physicists indeed believe) then it could well turn out that the value of G is constrained in a way that we do not yet understand. In which case it might be, for all we know, that the value of G is necessary.’

There are two problems with this kind of suggestion.

First, it does not solve the fine-tuning problem, because the fundamental principles or laws do not uniquely determine a fine-tuned universe. ‘Physics is blind to what life needs. And yet, here we are’ (Lewis and Barnes 2016, p. 181). For example, according to our present understanding, string theory (the most promising candidate ‘theory of everything’) does not predict the state of our universe but allows for a vast landscape of possible universes (Hawking 2003). Susskind notes:

The two concepts—Landscape and megaverse [i.e. multiverse]—should not be confused. The Landscape is not a real place. Think of it as a list of all the possible designs of hypothetical universes. Each valley represents one such design …. The megaverse, by contrast, is quite real. The pocket universes that fill it are actual existing places, not hypothetical possibilities. (2005, p. 381)

Thus, string theory does not uniquely determine the laws and constants (Friederich 2018), nor does it determine the initial conditions, such as the initial low-entropy condition.

The landscape, which is a large set of possibilities, ‘can’t of itself solve the fine-tuning problem; in fact, it’s part of the problem. As an illustration, the large number of possible lottery tickets is precisely what makes winning unlikely’ (Lewis and Barnes 2016, p. 305).

Second, the ‘Regularity’ hypothesis only pushes the question one step back: how could such mindless non-intelligent, non-random causes have this orderly behaviour, and how could such mindless causes generate a universe with such a high degree of mathematically describable order? As Frederick observes:

It is obviously useless to point out that some laws can be explained in terms of other laws, for example, that we may explain why matter accords with Einstein’s quantitative law of gravitation (a modification of Newton’s inverse-square law) by invoking the law that a body will pursue the easiest course through undulating space-time. That just puts the puzzle back a step. How can it be that every body always pursues the easiest course? The explanation of some laws in terms of others leaves unanswered the question of how mindless matter, or forces, can behave in a way which accords with a law. (Frederick 2013, p. 271)

(The hypothesis that this question can be pushed back ad infinitum because there is an infinite regress of non-intelligent, non-random causes is considered under the ‘Uncaused’ hypothesis, which is refuted by the arguments presented in Chaps. 5–7.)

4.6 Combination of Regularity and Chance

Consider (iii) ‘Combination of Regularity and Chance’. Plato (the Laws, Book 10) mentioned that those who denied the gods’ existence had argued that the order we perceive in the universe is merely the product of the interaction of chance and regularity. A modern-day proponent would be Stenger (2000), who argues that the laws of physics do not need fine-tuning because they are based on a combination of symmetry and the random breaking of it. However, Stenger fails to explain ‘why would randomly broken symmetry give rise to precisely the right set of laws required for life instead of the vast range of other possibilities?’ (Collins 2013, pp. 37–38). This indicates that a fine-tuning of the symmetry breaking would be required.

Cosmologist Lee Smolin (1997) has proposed a naturalistic evolutionary scenario for universes. He suggests that the singularities inside black holes are the sources of new baby universe phases that resemble their parents. As each black-hole singularity individually produces a different universe phase, and in each case there is a slight readjustment of the fundamental physical constants, there could be some form of ‘natural selection’ of universes, whereby the fundamental constants slowly evolve to yield ‘fitter’ universes in which black holes proliferate and which thus produce many ‘children’. Over further generations, universes with black holes and stars (including those which help support life) would come to dominate the population of universes within the multiverse. Smolin argues that there is some indication that the fundamental physical constants of our universe are indeed such as to favour a proliferation of black holes.

Other physicists such as Roger Penrose have criticized Smolin’s proposal for the speculative nature of the idea that the fundamental physical constants are readjusted as new baby universes are formed from black-hole singularities. Penrose also criticizes Smolin for the geometrical implausibility of the idea that highly irregular singularities can magically convert themselves into (or glue themselves to) the extraordinarily smooth and uniform Big Bang that each new universe would need if it is to acquire a respectable Second Law of the kind that we are familiar with (Penrose 2004, pp. 761–762). Moreover, ‘it’s probably easier just to create black holes directly in a lumpy Big Bang or by fluctuations in an inflating universe rather than go to all the bother of creating stars’ (Lewis and Barnes 2016, p. 355). Given this, the proliferation of universes with stars that support life would not be likely.

Additionally, based on the discussion in the foregoing sections of this chapter, it can be seen that, for an evolution of universes (or other kinds of ‘Combination of Regularity and Chance’) to happen, a high degree of order (such that particles do not move in billions of alternative directions at each moment, etc.) and fine-tuning (in order to avoid the Boltzmann Brain problem, etc.) must already be in place. The existence of such order and fine-tuning remains unexplained by the ‘Combination of Regularity and Chance’ hypothesis. (As argued in Sect. 4.4.5, multiverse theories do not provide a reasonable explanation for this initial order and fine-tuning either.)

One might object that Darwin’s work shows that the existence of order is not necessarily proof of deliberate creation, and that what applies to biology may well apply at other levels.

In reply, on the one hand, Darwin’s work only applies to a certain kind of order, namely, ‘intermediate order’. This is the kind of order whereby, once certain ordered regularities (e.g. natural selection) are in place, certain complex systems may develop via a process over time. Indeed, as Kojonen (2021) argues, given the possibility that a Designer could work through secondary causes such as setting up these regularities and the initial conditions and using these to bring about different living organisms, and given that Darwinian explanations are actually compatible with the biological design argument in this sense, Darwinian evolution has not refuted the biological design argument at all (I argue that evolution is compatible with Christian theism in Loke 2022). Kojonen also notes, ‘In the case of complex phenomena, it is often the case that there is not just a single “best explanation,” but rather different facets of the phenomena are explained by different explanations. Getting the full explanation may require combining, rather than just contrasting explanations’ (ibid., p. 88). In other words, in the case of biology, there may well be evidence of both evolution and design (at the deeper level of what makes evolution possible) that warrants the combination of both explanations.Footnote 21

On the other hand, the argument offered here concerns ‘order at a more fundamental level’. That is, it concerns the regularities which are required to be in place in order for ‘Combination of Regularity and Chance’ to be possible. This kind of order cannot in principle be explained by evolutionary theory, since the theory presupposes the existence of this kind of order.Footnote 22 As explained in the discussion on the Regularity hypothesis above (see Sect. 4.5), the postulation of this order leaves unanswered the question of how mindless matter can behave in a way which accords with this order. (The objector might reply by hypothesizing that this order is uncaused; he/she might suggest that the combination of chance and regularity could cause design-like complexity, starting from simpler uncaused elements.Footnote 23 In reply, my arguments in Chaps. 6 and 7 against the Uncaused hypothesis would rule out such a hypothesis.)

4.7 Conclusion

I have formulated an original deductive argument which demonstrates that the following are the only possible categories of hypotheses concerning ‘fine-tuning’ and ‘the existence of orderly patterns of events which can be described by advanced mathematics’: (i) Chance, (ii) Regularity, (iii) Combinations of Regularity and Chance, (iv) Uncaused, and (v) Design. I have shown that there is an essential feature of (i) Chance, (ii) Regularity, and (iii) Combinations of Regularity and Chance which renders them unworkable as the ultimate explanation for the fine-tuning and order. The only remaining hypotheses are Uncaused and Design. One key issue is whether physical reality has a beginning, for if it does, then, given the Causal Principle established in Chaps. 2 and 3, it is not uncaused. To address this key issue, I shall first discuss, in the next chapter, whether an actual infinite regress of events is possible and whether there is a First Cause.