Abstract
The theme of this book is the place of organization in the life sciences, especially biology. In that context, this essay is concerned with the place of organization within mind and the place of mind within the life sciences, especially biology. There are many possibilities for theories of mind, ranging from noumenal to neural to nihilist (behaviorist), and for most of these, the question of the role for organization therein makes no sense; further, they escape, or are opposed to, any deep tie to biology. Even when some link to biology is acknowledged, as for physicalisms, no inherent notion of organization appears in their development. But this chapter will present a thoroughly organizational conception of mind-as-cognition, anchored in a supportive conception of biology.
5.1 Introduction
The theme of this book is the place of organization in the life sciences, especially biology. In that context, this essay is concerned with the place of organization within mind and the place of mind within the life sciences, especially biology. There are many possibilities for theories of mind, ranging from noumenal to neural to nihilist (behaviorist), and for most of these, the question of the role for organization therein makes no sense; further, they escape, or are opposed to, any deep tie to biology. Even when some link to biology is acknowledged, as for physicalisms, no inherent notion of organization appears in their development. But this chapter will present a thoroughly organizational conception of mind-as-cognition, anchored in a supportive conception of biology.
There are three versions of how something – here, cognition – is bio-organizational, each more stringent than its predecessor. (I) Cognition is best understood from within a bio-cognitive organizational framework. (II) There is a key high-level organizational characterization of cognition. (III) At the core of cognitive function is organization. Here explanation is ultimately dynamical explanation, and these three characterizations of cognition are to be considered as three degrees of explanatory centrality for organization, rather than, for example, as three distinct conceptual kinds (see below).
Consider, in illustration, an unheated pot of fluid on a stove, its liquid molecules moving at random. There is neither ordering nor organizing. Then the stove is used to gently heat the bottom liquid layer. The liquid forms ordered horizontal layers, warmest at the bottom, coolest at the top. Molecular motion remains random horizontally, but the vertical symmetry of random motion is broken, replaced by an ordering of layers by temperature (random molecular energy) that conducts the heat slowly upwards. Finally, as heating increases, rolling-boil (Bénard) cells form: vertical and horizontal symmetries are broken, and random motion is replaced by a pattern of cells. Within each cell, molecules move circularly, conveying hotter liquid up to the fluid surface and cooler liquid back down to be reheated, and the cells' circular motions are so arranged horizontally that at each pair of adjacent cell surfaces the molecules move in the same direction. The whole manifests moderate order (horizontal) and moderate organization (vertical and horizontal). Then, following the three nested requirements for organization as fundamental, we have the following: (I Bénard) The phenomena are indeed best understood from within the molecular organizational framework given above. (II Bénard) There is a key high-level organizational characterization of the phenomena as representing a succession of molecular arrangements providing increasing heat transfer capacities. (III Bénard) At the core of this succession lies the breaking of symmetries, partly through ordering (vertical stratification), but with the largest capacity shift achieved through coordinated horizontal and vertical reorganization. Nor is more needed for core understanding: within limits, it does not matter what the fluid is, what the heat source is, or what the pot is made of; the sequence of pot states will recur.
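As a hedged aside (a standard fluid-dynamics result, not from the chapter itself): the onset of the cellular regime is governed by a single dimensionless control parameter, the Rayleigh number, with convection cells appearing once it exceeds a critical value:

```latex
\mathrm{Ra} \;=\; \frac{g\,\alpha\,\Delta T\, d^{3}}{\nu\,\kappa}
\;\gtrsim\; \mathrm{Ra}_{c} \approx 1708
\quad \text{(rigid top and bottom boundaries)}
```

where $g$ is gravitational acceleration, $\alpha$ the fluid's thermal expansion coefficient, $\Delta T$ the bottom-to-top temperature difference, $d$ the layer depth, $\nu$ the kinematic viscosity, and $\kappa$ the thermal diffusivity. The point echoes the text: the threshold depends only on this combination, not on which particular fluid, heat source, or pot is involved.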
5.2 Characterizing Organization
The intracellular Krebs cycle is a useful model of organization. Its function is to extract usable energy for the cell and eject waste. It is made up of several molecular steps and produces several products, each step combining a specific external input with the current internal chemical to dynamically produce the output internal chemical for the next step. It is typically diagrammed as a large cycle with several nested cycles driven off it and ordered around it.Footnote 1 Organisms are congeries of such kinds of processes, nested from the subcellular (Krebs cycle) to the whole organism (e.g., respiration), all component processes appropriately space-time interrelated.
This is not so different from a motor vehicle engine, where the many kinds of components are very different from one another (cf. fuel injectors, spark plugs, camshafts) yet are interrelated in many distinctive ways so that together they perform the transformation of fuel into linear motion. We can think of this as an interrelated structure of sub-functions – fuel injection into cylinders, pistons rising and falling, sparking the injected fuel, etc. – that together bring about the overall global function. In a clarified ontology, each sub-function is realized as a causal process (one driven by an energy gradient) that takes its start as the function initial condition and moves dynamically to generate the function end condition. In many situations, sub-functions and their realizing dynamical processes may come and go as part of the overall function/process. Immediately after sparking, a large energy gradient forms in the cylinder, forcing the piston back along the cylinder. But this gradient only lasts until the fuel is “burnt.” Then another cylinder takes its turn. Similarly, there are many biochemical interactions in molecular biology that only briefly exist while some momentary, but precisely located, function is realized. In the case of the engine, the constraints that structure and stabilize these processes include the entire engine frame and are much longer lasting than individual cylinder processes. And this is common for current human-made machines. But in molecular biology, it frequently occurs that whole realizing processes, energy gradients and constraints are ephemeral, changed by equally ephemeral processes of which they are temporarily a part (cf. a seasonally eroding river bank and its flow). This should be understood as normal. The core process organization that grounds cognition relies on just such a structure (see III below).
“Organization” has a narrow and a wide usage. In its narrow usage – n-organization – it means possessing internal, nested correlations of the general sort illustrated above in the Krebs cycle and car engine. In its wide usage, “organization” means no more than “is in some respect, to some degree, systematic,” as in having a well-organized work desk. In this wide sense, one may speak of hierarchical organization even though only an ordering by parts and composition is intended, whether or not internal nested correlations are involved. “Self-organization” as commonly used includes molten iron cooling down to a solid bar (the ion lattice is “well organized”), and ordering coins by size through random vibration against varying mesh-widths. In neither case is n-organization part of the output. And in neither case is there any more than the faintest suggestion of a “self-active” process. But both of these examples have new constraints as output. This feature generalizes: self-organization is best conceived as a process leading to the emergence of new constraints, whether or not they produce n-organization and whether or not there is an active self involved (see Hooker, 2011c). Here we are concerned only with n-organization where, as we shall see, it forms a distinctive class of biological conditions.
Ultimately, all n-organization is grounded in dynamical processes, as are all non-organized (a-organizational) states and behavior. N-organized dynamics grades into a-organized dynamics (i.e., plain old dynamics) as the internal processes show decreasing variety, decreasing uniqueness and complexity of collective functions realized and increasing dependence on specific dynamical conditions. But the universality of dynamics is the same in all cases. Two billiard balls colliding show no n-organization but are fully dynamical; the Krebs cycle is strongly n-organizational but each transformation is fully chemo-dynamical. N-organization carries only relationship or form, not quality; quality is carried by dynamics, including the dynamics that grounds relationships or form. This applies to cognitive accounts (e.g., Russell’s electrical charge, Penrose’s intracellular coherent quantum states – Russell, 1927; Penrose, 1989). Here only n-organizational character will be considered.Footnote 2
N-organization and Order
In terms of interrelations between components, n-organization lies between complete disorder and complete order. Complete disorder is where all component interrelations are random, so that there is no simplifying multi-component pattern which constrains their interrelated behaviors. With completely ordered components, there is a governing pattern, illustrated in soldiers marching in tight formation, or crystals in a uniform lattice, and also distinct from the random collection of its components. With n-organized components, there are also governing patterns, but these can be much more complicated and subtle than the simplicities of complete randomness or complete orderedness (cf. Krebs, engine).Footnote 3 Wholly random and wholly ordered are poles of zero internal n-organization, all n-organized systems falling somewhere between them. Bennett proposed the notion of logical depth to capture a formal notion of n-organization located along this continuum (Bennett, 1985). Roughly, logical depth is the number of nested correlations within correlations in an entity. This is certainly an important step in the right direction because it places distinctive correlations at the heart of n-organization. But obtaining a satisfactory measure for degree of n-organization is not easily done.Footnote 4 Further exploration lies beyond the scope of this paper.Footnote 5
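One way to see why a satisfactory measure is hard to obtain is that description length alone (the usual complexity proxy) separates random from ordered strings but scores ordered and n-organized strings similarly, which is part of Bennett's motivation for measuring something else (nested correlational depth, cashed out as computational run time). A toy sketch, with invented example strings and compression as a crude stand-in for description length:

```python
import random
import zlib


def compressed_len(s: bytes) -> int:
    """Crude description-length proxy: size of the zlib-compressed string."""
    return len(zlib.compress(s, 9))


random.seed(0)
disordered = bytes(random.randrange(256) for _ in range(4096))  # no pattern
ordered = b"A" * 4096                                           # one uniform pattern
organized = (b"ABBABAAB" + b"BAABABBA") * 256                   # repeats within repeats

# Compression flags randomness as incompressible, but assigns both the
# ordered and the (mildly) organized strings similarly short descriptions:
# description length alone does not isolate n-organization.
sizes = {
    "disordered": compressed_len(disordered),
    "ordered": compressed_len(ordered),
    "organized": compressed_len(organized),
}
```

The shortfall is the instructive part: a measure of n-organization must weigh how correlations nest, not merely how briefly the whole can be described.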
5.3 N-organization and Bio-cognition
Briefly, Looking Ahead
First, it is argued that a specific kind of n-organization, called autonomy, characterizes all and only living organisms. Autonomy is shown to ground all the major n-organizational aspects of agency. Second, cognitive agency, the main objective here, is in turn shown to be a sub-class of autonomous agents and so ultimately a specific class of n-organized systems. Third, cognitive agency spans a range from elementary to deep problem-solving powers, a range that can be characterized n-organizationally. In sum, autonomy > agency > cognition > deep cognition, each step along the way distinctively and strongly characterized n-organizationally. With this framework in mind, let us proceed.
5.3.1 Autonomy, Agency (and Robotics), Auto(self)-directedness, and Anticipation
Autonomy
Our concern in this paper is with the place of n-organization in a biologically centered account of mind. Even so, it is essential to begin with at least one aspect of the wider issue of the place of n-organization within biology generally. N-organization lies at the heart of what an organism is and when we properly understand how that is, we shall have constructed the basis for an n-organizational account of organism minds.
At their most basic, all living things are thermodynamic engines, existing in a far-from-equilibrium condition only maintained by conversion of an input flow of negative entropy (food) to do work and by the export of unutilized material to the environment (wastes). Essential work is of three kinds: (i) the repair or replacement of internal infrastructure, including of any enclosing membrane, and of the capacity for suitable work, (ii) the support of action in the environment, and (iii) the export (elimination) of wastes. This is already an n-organizational arrangement, focused around two cycles, an external interaction cycle with the environment comprising resource extraction and waste elimination and an internal action cycle comprising repair and replacement.
There are various obvious constraints on successful versions of this n-organizational arrangement: (C1) the negative entropy input flows have to arrive in a timely manner, at appropriate places and in appropriate quantities, to sustain all the organism’s processes; (C2) the internal work doable on these flows by the organism must produce sufficient components to fully support the internal repair work, including reproduction of the repair capacities; and (C3) at the same time, their consequent resource exploitation and waste accumulation must be extractable and exportable by the organism at sufficient rates and volumes as to avoid both direct damage to the organism internally and indirect damage via environmental damage. Despite their apparent particularity, these constraints are in fact permissive in form. For instance, it does not matter whether the food-gathering action is largely passive (e.g., a pitcher plant trapping insects) or active (e.g., a dragonfly hunting insects), discriminating (e.g., a koala’s taste for eucalypt leaves) or indiscriminate (e.g., the pitcher plant); it matters only that it satisfies at least the constraints C1–3.Footnote 6
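Read as rate conditions, C1–C3 can be caricatured in a few lines. This is a toy schematic with invented threshold inequalities and arbitrary units, not a model of any real metabolism:

```python
def viable(intake_rate: float, work_eff: float,
           repair_need: float, waste_rate: float, export_cap: float) -> bool:
    """Toy check of the three permissive constraints on autonomy.
    All quantities are hypothetical rates in arbitrary units."""
    c1 = intake_rate > 0.0                      # C1: timely negentropy input arrives
    c2 = work_eff * intake_rate >= repair_need  # C2: work covers repair/replacement
    c3 = export_cap >= waste_rate               # C3: wastes exportable fast enough
    return c1 and c2 and c3
```

Whether the intake is a pitcher plant's passive trapping or a dragonfly's hunting is invisible here; only the inequalities matter, which is the sense in which the constraints are permissive in form.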
The condition for organism viability is that each cycle is supported and the two cycles so interact as to meet constraints C1–3 above. This n-organized dynamical viability condition is called autonomy.Footnote 7 It picks out all, and plausibly only, living individuals – from cells to multicellular organisms to various multi-organism communities, including many (but by no means all) business firms, cities and nations. There is an issue of how sub-function processes might exactly fit together, each helping to canalize others (e.g., Kauffman’s work-constraint cycles, Kauffman, 2000), to achieve self-reproduction on a sufficiently small scale (contrast engine repair and the whole economy), but in principle some combination of longer-lasting and ephemeral process constraint formation should do it.Footnote 8
The name is appropriate: in autonomous systems, the locus of living process regulation lies more wholly within them than in their environment. Birds use twigs to make nests, but twigs themselves have no tendency to use nests or birds to any purpose. Hence the term, echoing the root sense of autonomy in its traditional social usage. Moreover, there is a richness to the notion that escapes the bare appearance of inter-locked cycles. Autonomous entities have a distinctive wholeness, individuality and perspective in the world, derived from the global, interconnected nature of their cycles and the regenerative condition they sustain. This gives rise to the achievement (or not) of an integrated condition of satisfied autonomy.Footnote 9 Further, when this satisfaction condition is available to the organism itself as some kind of associated signal (e.g., absence of enclosing membrane stress), then the autonomous system has a basic sense of normative requirement. A situation will come to be identified as injurious (reduced integrity), healthy (increased integrity), or neutral, an evaluation that amounts to a distinctive normative perspective. In this manner, autonomous system activities are also willful, anticipative, deliberate, normatively self-evaluated, and adaptive. Such entities are properly treated as genuine agents; autonomous systems are inherently all of those things.
Agency and Robotics
Meanwhile, let us pause to briefly consider technologies in relation to autonomy and the possibilities of autonomy-based robotics. The dominant difference between biology and technology is, as the petrol vehicle illustrates, that organisms are much more active, responsive and integrated entities than technological systems are, or are often capable of being. A primary difference lies in the inner loop. Vehicles are not self-repairing. Their metabolism is outsourced to repair specialists (mechanics), and from there – via the manufacture of spare parts and tools for repairing, the process strengthened by human n-organizational technologies such as pacemakers – to the rest of the economy. Plants do reproduce branches and roots, and both they and animals adaptively alter their bodies in response to environmental interactions, but in animals these alterations are mostly confined to nervous systems, while plants self-maintainingly adapt their bodily forms to support photosynthesis; and both, like vehicles, rely on a larger ecology for the resources to do so. Pursuing these analogies raises issues concerning how widely distributed, “socially” interlocking and interactively open an agent’s body may be; conversely, how deeply capacity-modifying prostheses may be integrated with “natural” agents; and how much reliance on a surrounding ecology for repair differs n-organizationally from reliance on societal economies (cf. notes 9, 10). A second primary difference lies in the external loop: organisms are much more active in responsively regulating their interactions with their environment, and within themselves. This difference is deeply rooted in organism autonomy, which provides them a self-orientation to the world that works on integrating many streams of information (perceptual, proprioceptive, affective, etc.), using them to enrich and modify their anticipative interaction models and the directed responses to which these give rise.
Finally, organisms show a wide variety of boundary forms and defenses, from an identifiable exclusionary membrane offering regulated intake of specific nutrients (e.g., gastrointestinal membrane) to socially constructed maintenance of internal community regulation (e.g., through mating roles) and to a highly inter-penetrating film through which DNA may be directly interchanged. These differences are rooted in differences of n-organization.
This autonomy-based characterization of agency meets all the criteria for being deeply n-organizational: (I) Agency is best understood from within a bio-cognitive n-organizational framework, namely, the inter-locking cycles underlying agent autonomy. (II) There is a key high-level n-organizational characterization of agency, namely, as expressing autonomy. (III) And since every capacity of agency is based on autonomy, whose core is n-organizational, the core of agency is n-organizational. The distinctive n-organizational character of life penetrates deeply into its nature, into universal roots constituting agency. And it will be on this basis that any account of mind as n-organizational will be built and find its place in biology.
Meanwhile, there remain 3 + 2 robotics issues. (A) How might the constitution of an integrated internal perspective be achieved, if at all? What role has autonomous n-organization in the construction of robotic focused and responsive bodies? What might their perspective on the world be? (B) How does the manufacture of tools by tools and commodities by commodities proceed, and how, if at all, does it include all elements? (And how is it shown that the manifest tool improvement that does in historical fact occur within it can actually occur within it?) (C) What is the biology and sociology of boundaries, how can these be constructed robotically, and with what consequences for internal n-organization? Each of these presents a deep and subtle problem. They are left for the reader to consider, as is their impact on the n-organizational character of these aspects of natural and artificial existence. Only after these issues have been addressed will there be a proper platform for addressing the remaining two: (D) What are the limits to autonomy? And then (E) can there be a truly autonomous robotics?Footnote 10
Returning to the main argument, its overall structure is as follows. Self-directedness and anticipativeness are two fundamental cognitive capacities harbored by autonomy. Mutually supporting one another, these capacities form the central cognitive process of self-directed anticipative learning (SDAL). SDAL in turn provides the foundation of the deepest, most powerful forms of problem-solving, that is, of cognition, and of tracking, that is, of intentionality. Thus, intention and cognition are provided their common n-organizational root.
Auto(self)-Directedness
Auto- or self-directedness [the latter, more common, version will be used] is the capacity to self-modify interaction in the light of its evaluation by the directing organism. Changing behavior to acquire newly available food (e.g., spring flower nectar) is one example, and changing behavior to manage pain is another. Such sensitive, conditionalized attention forms the intertwined root capacities of intention and cognition (cf. Christensen & Hooker, 2004). A mosquito has one known such process (whether or not to initiate search for a blood host – Klowden, 1995), while a mammal has a vast number of such conditionalizing processes, especially within its motor regulatory system. Cycles (the n-organized aspect) of signaling and initiating specific actions within the external interaction cycle, and evaluating their outcomes against autonomy support through the internal interaction cycle, provide the basic autonomy n-organization with strong outcome-led self-directedness. In appropriate contexts, something about the direction of value increase (i.e., autonomy support) is also provided (e.g., by testing small departures in various directions from the present setting to see which is more rewarding). In more sophisticated form, self-direction allows the interplay of multiple evaluative signals: combining, compromising and conditionalizing them when arriving at which values are appropriate for guiding action in the context, recognizing the corresponding multiple streams of information as relevant to those decisions, and, in that light, integrating them to regulate decision-making. The more mutually convergent guiding values and streams of information the learner has about performance, the more effective its actions can be. Initially, guidance will be limited because of learner ignorance, while at the concluding stage information will have been sufficiently enriched, focused and integrated into the interaction cycles as to allow the learner to converge on a solution.
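That probe-and-compare tactic (testing small departures from the present setting and moving toward the more rewarding one) is, in effect, gradient-free hill climbing. A minimal sketch, in which the one-dimensional "setting" and the reward peak at 2.0 are invented for illustration:

```python
def self_directed_step(setting: float, reward, step: float = 0.1) -> float:
    """Probe small departures either side of the current setting and
    move to whichever the evaluative signal rates highest."""
    candidates = (setting - step, setting, setting + step)
    return max(candidates, key=reward)


# Hypothetical evaluative signal: autonomy support peaks at setting 2.0.
reward = lambda s: -(s - 2.0) ** 2

setting = 0.0
for _ in range(50):
    setting = self_directed_step(setting, reward)
# The setting climbs toward the peak and then holds there.
```

The organism needs no model of the reward surface, only the ability to compare evaluative outcomes of nearby variants, which is all the text's "testing small departures" requires.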
Anticipation
There is another, closely related, feature that the mosquito and the mammal share (very unequally): anticipation. Reflex and random actions aside, every action anticipates its outcome. At its most primitive, anticipation is the forming (learning) of a simple association between current features and an outcome of an action. The bee dance anticipates re-locating ephemeral nectar supplies as outcome; it would not be attended to unless that outcome and its attendant resource availability were frequently enough the consequence of the dance.Footnote 11
Anticipative learning is where the organism learns to anticipate a goal achievement by employing an action sequence, thus associating receiving goal satisfaction with doing an action sequence. Elementary associative learning such as neural conditioning provides the simplest anticipative associations. More sophisticated versions of this process can be elaborated as learning capacities widen. For instance, though much more sophisticated than the mosquito, the cheetah swinging rightwards chasing a dodging gazelle with a right-swing bias is still doing so anticipating a desirable outcome (a kill). But with the cheetah, all the mammalian apparatus of planning ahead, tracking trajectories for oneself and others and so on is put to use managing these interactions fluidly and at high speed. The cheetah’s many associations – approach downwind, remain camouflaged where possible, maintain prey separation from herd and so on – have come to be integrated in richly associated models (here of the hunt). Bringing all these capacities together, in n-organizationally mutually supportive ways, provides the close attentiveness to problem-solving that is the core of intentional cognition.
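The elementary associative (conditioning) step can be written as the classic delta-rule update, in which associative strength climbs toward the anticipated outcome at a rate set by the prediction error. A sketch in the style of the Rescorla-Wagner model (the parameter values here are arbitrary):

```python
def associative_strengths(trials: int, alpha: float = 0.3, lam: float = 1.0):
    """Delta-rule learning curve: each trial moves associative strength v
    toward the outcome value lam by a fraction alpha of the prediction
    error (lam - v). Returns the strength after each trial."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * (lam - v)  # prediction error drives the update
        history.append(v)
    return history
```

Strength rises steeply while the outcome is still surprising and flattens as the anticipation becomes entrenched, matching the text's picture of an association that is reinforced only insofar as the outcome frequently enough follows the action.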
Self-directed Anticipative Learning [SDAL]
The combination of self-directedness and anticipative action provides the basis of fluid self-steering. An action is undertaken in anticipation of achieving a goal; if it does so, the anticipation is entrenched, and if it does not, the action may be repeated, extended or modified, at the actor’s self-direction. In this way, the actor steers itself through a process of learning its environment. The capacity this invests in its agents is adaptability. The ultimate goal of external adaptation is internal regulation, i.e., to be able to regulate the operation of the twin autonomy cycles so as to continue to satisfy autonomy in the environmental circumstances obtaining. However, in a dynamic environment, where creatures are constantly changing (e.g., their current location and posture), often across many fronts and on many time scales, detailed adaptation is momentary and only approximated. Instead, it is necessary to be adaptable: able to adapt once-useful adaptations as new conditions emerge (run from a predator, switch diet, migrate, etc.). There are limited physiological adaptabilities, most subconscious and of fixed operation (e.g., callus formation, switching to burning visceral fat to extend flight in extremity). But the largest, most variable and most rapidly adaptable are the behavioral adaptabilities, from singing to flying to technology construction, regulated by the central nervous system and largely expressed through the motor system.
These latter features (largest range, most variable, most rapidly adaptable) do not in themselves constitute more than small augmentations of cognitive power. Fluid adaptation ranges from the superficial to the deep, and these add finesse to the superficial capacity. That the flatworm withdraws into the shadows in a larger range of ways and circumstances, more variably, and faster, when a light is shone on it, does not modify its simplicity, or its fixity, of response. Superficial adaptation offers fixed information channels and evaluation routines that provide only first-order fixed responses to changing situations, the whole working off an n-organized algorithm without the need of higher-order regulation, something that fairly cheap route planners and guided missiles, along with rafts of insects, worms and others, can provide. Moving toward greater capacity involves increasing numbers of conditionalizations, supporting increasing spread and discrimination of judgment. Though always useful, this level of fluid but fixed regulation cannot surmount significant shocks such as failure to recognize anticipated response sequences, or interaction dynamics altering mid-action.
Beyond this impasse lies the introduction of increased layers of higher-order conditionalizations, offering increasing orders of responsiveness and increasing integration of responses. Sufficiently developed, such higher-order, integrated judgment formation underlies powerful new dimensions to problem-solving, for instance, the capacity to respond to a ‘‘mis-match’’ signal as indicating, not merely a new trial in response, but a change in investigative methods used. Consider discovering through a mis-match signal (e.g., unexpected viral outbreaks) that the present testing method has an unexpected high false-negative rate (say in pharyngeal swab testing for a viral infection), requiring a change in testing method to achieve greater reliability in estimates of infection rates and hence in demand for healthcare resources, and so on. As well as method change, consider also bringing about reformulations of the problem to hand (‘‘It’s not the measuring process, but it’s the modeling of sub-population interactions we are using’’), changed criteria for successful outcomes (“predictions of infection breakout locations and frequencies accurate to within 10%”), changed external constraints framing the problem (‘‘rural sub-populations are much more constrained by travel times’’) and changed criteria for “cleaned” data supporting these judgments (‘‘estimates of false positives as well as of false negatives are required’’). As will appear, supporting the integration of these features will provide deep fluid adaptation, or deep adaptability, the mark of truly human intelligence. (In this respect, we are a long way yet from deeply intelligent robotics.)
To see how these features work together, consider a detective conducting a murder investigation. She uses clues from the murder scene to build an initial proposed profile of the suspect and then uses this profile to focus the direction and methods of the investigation. Lipstick on a glass suggests a crime of passion, with the suspect female, in a personal or sex worker relationship to the victim. The profile tells the detective what the murderer may be like and what characteristic types of clues to pursue. For a crime of passion, look for further personal effects – special clothes, whips or other ‘‘technologies’’ in producing sexual effects, etc. Look too for places nearby, possibly frequented for romantic assignations, a romantic bar, a brothel, etc. The chosen profile in turn sets new intermediate goals, for example, narrow down the nearby places frequented, eliminate or reduce the likelihood of the suspect being a male cross-dresser, but conversely try to obtain an estimate of how many women might be involved. If the chosen profile is at least partially accurate, and with a little luck, the modified investigation will uncover further evidence that in turn further refines the search process, ultimately culminating in the capture of the murderer, and resolving the nature of the investigation.
But such searches are not fixed; a good detective will have in mind other possible profiles awaiting supporting evidence. Further search of the murder site, for example, may uncover a gambling note for a substantial sum. This turns attention to enforcing debt default as the kind of crime involved. The lipstick cue does not fit comfortably into this version; the culprit is more likely a male, with a history of criminal activity and likely enforcer violence. This profile redirects the search from sexual partners to gambling associates and perhaps money laundering and the like. From this point of view, the lipstick is mis-directing; perhaps it belonged to an attempt to persuade the victim to settle his debts, or was indeed worn by a cross-dresser, but just as a personal quirk, irrelevant to the financial issues at stake, or planted to ‘‘throw the investigation off the scent.’’
It is the interplay between the discovery of clues, the construction of a suspect profile and subsequent modification of the investigation that makes the process self-directing. It is powerful self-direction because it encompasses re-thinking the nature of the investigation (here from sex to gambling), contextual assumptions (here from assignations to debt collections), data (lipstick from evidence of lover’s presence to irrelevance), and solution types (from identification of passionate conflict, murder process and culprit motive to identification of debt association and assassin presence and actions). As an organism interacts in an SDAL process, its improving anticipative models and model-based interaction processes allow it to (a) improve its recognition of relevant information, (b) perform more focused activity, (c) evaluate its performance more relevantly and precisely, and (d) learn about its problem domain more effectively. Indeed, in this setting, error itself can be a rich source of context-sensitive information that can be used to further refine these four features.Footnote 12 The richer the system’s anticipative and norm structures are, the more directed its learning can be, and the more potential there is that learning will improve the system’s capacity to form successful anticipative models of interaction. To this the detective adds an additional kind of learning, higher-order learning about the entire domain of murders. It is the capacity to learn across many such investigations what sorts of profiles there are; what are their general features and rare exceptions; what their associated kinds of investigatory methods, timetables and costs; and so on and to recognize when there is more to learn and how to be alert to doing it that makes the process such a powerful problem-solving tool.
When that kind of ‘‘double-loop’’ learning occurs, the detective is both learning what works, or not, in the immediate investigation to hand and at the same time using that experience to improve general knowledge of detecting murders, and crimes more generally – knowledge that will in turn be used to improve the next investigation. In short, by learning a higher-order characterization of the problem class (murder investigations), she will have been learning how to learn about investigations in that domain while learning how to solve specific problems to hand. Just this is the fundamental bootstrap required for all learning to be improvable. It forms the key to understanding the n-organization, and thence the general power, of the learning process. Indeed, this process allows the rational resolution of initially ill-defined problems, problems whose formulation and structure are vague, gappy or ambiguous, or tacitly internally conflicted, or whose valid methods are unsettled, like how to detect or marry well, or validly test a scientific theory in a new domain. Such problems of necessity lie at the root of every new problem-solving domain.
It is possible to synthesize a model process for such learning-how-to-learn-while-learning processes. Each of the five foci, or nodes, of learning identified above (method, problem formulation, solution formulation, constraints, data) is represented. As each specific learning process is undergone, attention shifts from one node to another, or to several nodes in parallel, as the potential consequences of experimenting with alternatives are explored. (Cf. the different investigations formed by the detective’s various crime profiles.) Eventually (and with some luck), the investigations are reduced to one, the one that resolves the core detecting problem. Although all investigations share the same n-organizational form, the non-organizational features play their decision-structuring and decision-making roles alongside it, varying from incidental to central. A measure of their importance is the degree to which they must be appealed to at each choice point. For this reason, there is no specifiable model, let alone algorithm, for the order in which nodes are visited, nor for what changes are consequently made, nor for how these changes in turn spread across the nodes, nor for what kinds of compromises are made in reaching for an enriched solution, and so on (Footnote 13).
Yet the model does capture the fundamental n-organization of the kinds of actions that deliberative problem-solving centrally involves. In its lowest form, this n-organization is expressed in the cyclic processes of specific trial-and-error interrogations. Moving to higher-order organization, these n-organized trial-and-error processes are nested within sharings of information about how to coordinate the findings from several such trials covering these kinds of crimes. Every cheetah hunt, and every detective investigation, is unique in its qualitative details, but all are alike as n-organized hunting processes. In particular, they all share the higher-order prospects of reformulating the problem and/or the solution, and these are nested again inside more general formulations of investigating crimes of these general kinds. This tri-layer of nested cyclings is of the same general form as the Krebs cycle (above), but here its n-organizational depth is much greater because, for example, at each of its nodes it stores structured content about the domain related to that node, and stores cross-nodal interrelations pertinent to the domain involved, both contents increasing in richness as problem-solving experiences multiply, none of which the Krebs cycle has available. In this enriched form, the problem-solving model has deeply illuminated 30 years of research into the linguistic capacities of apes, and even shown how (pace Kuhn) rational deliberative problem-solving can proceed through scientific revolutions (Footnote 14).
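Purely as an illustrative caricature, and not as part of the author’s model (which, as noted above, explicitly resists algorithmic specification of node order), the tri-nested cyclic form just described might be sketched in code. All names below are hypothetical: an innermost trial-and-error cycle visits the five nodes, and episode outcomes are fed into a growing store of higher-order knowledge about the problem class.

```python
import random

# The five SDAL foci ("nodes") named in the text.
NODES = ["method", "problem formulation", "solution formulation",
         "constraints", "data"]

def sdal_episode(evaluate, revise, state, max_cycles=100, rng=None):
    """Innermost cycle: one trial-and-error investigation.

    Which node is visited next is deliberately NOT fixed here; a random
    choice merely stands in for the open, context-driven decision that
    the text insists cannot be algorithmically specified.
    """
    rng = rng or random.Random(0)
    for _ in range(max_cycles):
        if evaluate(state):          # core problem resolved?
            return state, True
        node = rng.choice(NODES)     # stand-in for context-sensitive choice
        state = revise(node, state)  # experiment with an alternative there
    return state, False

def sdal_domain(outcomes, domain_knowledge):
    """Middle/outer cycles: coordinate findings across many episodes,
    accumulating higher-order knowledge of the problem class itself
    ("learning how to learn")."""
    for state, solved in outcomes:
        domain_knowledge.append({"state": state, "solved": solved})
    return domain_knowledge

# Toy run: the "problem" is simply to reach a threshold by repeated revision.
state, solved = sdal_episode(lambda s: s >= 3, lambda node, s: s + 1, 0)
knowledge = sdal_domain([(state, solved)], [])
```

The sketch captures only the nested-cycle form (episodes within a growing domain store), not the content-rich, cross-nodal structure the text attributes to deliberative problem-solving.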
5.4 In Conclusion
It remains to reiterate that this model of problem-solving is primarily n-organizational. No matter the domain concerned, this moderately n-organized, moderately ordered process successfully models the general problem-solving process. The underlying sense of agency on which the SDAL problem-solving process is built is fundamentally n-organizational, satisfying the three criteria for being essentially n-organizational: (I) it is best understood from within a bio-cognitive n-organizational framework, (II) it has a key high-level n-organizational characterization, and (III) its core is n-organizational. The distinctive n-organizational character of life penetrates deeply into its nature, into the universal roots constituting agency. And it is on this basis that the roots of cognition have now also been revealed to be in essence n-organizational. (I) Problem-solving is best understood from within a bio-cognitive n-organizational framework, here that of autonomy-based bio-agency, with its distinctive accounts of identity and normativity. (II) There is a key high-level n-organizational characterization of problem-solving, namely that of the general SDAL problem-solving process model. (III) The core of problem-solving is n-organizational because it lies within the improvable, enrichable tri-nested cyclicities of the general problem-solving process model. Goal-pursuit is an inherently, if moderately, n-organized process; it marshals the steering sub-processes, anticipation and self-directedness, to orient to the goal and to explore self-improving ways to move toward it. SDAL, equipped with higher-order regulation, inherently embodies this n-organization. Global-level n-organization is emphasized by the steering processes in SDAL, which are typically higher order. This completes the n-organizational ‘‘golden thread’’ running throughout biology, ultimately integrating mind into living being.
Such n-organizational principles evidently have but small extension beyond life to the cosmos at large. While the inanimate world has n-organization wherever ‘‘mechanical’’ cyclicities operate, for instance in rolling-boil formation (Introduction: Bénard cell), n-organization evidently does not lie deep throughout the cosmos as it does throughout biology. The inanimate makes more use of order than of n-organization. No doubt this reflects the simplicity of orderedness and the priority in time of the inanimate world, with the emergence of life within it. This makes living n-organization, autonomy, the more remarkable (Footnote 15).
The details of the general problem-solving process will vary across subject matters. The golden thread of n-organization abstracts from these differences to locate a fundamental n-organizational category: life. It is thinkable that it might not have been so. Understanding how that n-organizational category is possible will involve tracing it back to the fundamental qualities as we know them, the quantum and relativistic qualities: mass, spin, charge, and so on, along with those that structure irreversible thermodynamics. There is at present no neat accepted story here, and the problems are so deep and unresolved as to make it thinkable that there is none for finite mortals to have. The complications rise further if those qualities associated with mind, the perceptual and emotional qualities, are included. It is always possible to try for a purely process account of these, or for a more n-organizational one, though how complete either can be currently remains open (Footnote 16).
Notes
- 1.
- 2.
An instructive case of dynamics in organization in this setting is the notion of levels of organization. See, e.g., Eronen and Brooks (2018). These, like emergents (see text above), can be made up of dynamical constraints within which the dynamics takes place (e.g., systems of double pendulums for constrained chaos), but they can also be externally unconstrained, their system-wide stabilities an outcome of their internal interactions (e.g., gravitational solar systems with planetary moons). Conditionalization within systems can also be by dynamical switching, like fast constraint formation, or as slower dynamical transitions to new interaction basins. (Cf. SDAL below.) The assumption that all these differences must instead be conceived logically has great difficulty in understanding any of them. As with weak organization (above), there is also a weak notion of level of organization where it names only commonalities of spatial scale, e.g., in the common ‘‘hierarchy of life’’ representation (cells, multicellular organisms, populations, etc.). Dynamical systems may have scales of statistical aggregation of various dynamical kinds, all consistently with also having cross-scale dynamical interactions. Such dynamical distinctions are likely to play important roles in accounts of brain function underlying cognition and other mental capacities, but not when confined to logical models of brain function where dynamics is neglected. (The otherwise useful review of the conventional literature by Eronen and Brooks, e.g., makes only occasional mention of dynamical levels and does not explore the consequences of a systematic dynamical approach. See further, e.g., Hooker, 2004, Sect. 5, 2011d, Sects. 4–5, for expositions of agency and cognition in dynamical terms, as in Moreno & Mossio, 2015; cf. Hooker, 2011b; Hooker & Hooker, 2018.)
- 3.
- 4.
For instance, correlations can be used to specify both ordered and n-organized states, so when is each supported? How are cycles that stay within an order (e.g., a cycle within a device) compared with those that move across functional orders (e.g., a cycle that includes both machine and regulatory administrative states)? How are these to be compared when system n-organization is vertically modular versus horizontally modular? And so on.
- 5.
As note 4 illustrates, there is at present little to be gained from pursuit of precise definitions, formal or otherwise, for the foregoing distinctions, or for similar notions to come concerning agency and cognition. Rather, there are good examples on which to rest creative conceptions, as a way of moving forwards constructively. This approach is buoyed by noting that even in the most developed domains, like physics, definitions, if they come at all, come after the domain has been thoroughly understood, not beforehand; pursued too early, they can stifle deeper explorations.
- 6.
This example makes it obvious that there will be a raft of particularities characterizing the many different ways to satisfy these constraints. In addition, further rafts of particularities will characterize near-satisfactions that do not strictly satisfy all the constraints but do so nearly enough, and long enough, for organisms to replicate before dying, and so on. Again, there is at present little fruitfulness in attempting to explore these n-organizational byways.
- 7.
- 8.
See Moreno and Mossio (2015, Chap. 1) and its Foreword by Hooker (pp. x, xi). This remark covers a complex issue: how is autonomy to be understood? The origins of the notion of autonomy lie with the biological ideas of Maturana and Varela concerning the nature of cellular life (Varela et al., 1974; Varela, 1979; Maturana & Varela, 1980, among others) and with attempts to construct formal principles that distinguish living forms (Rosen, 1991; cf. Smithers, 1995). Here the notion of a closed set of states, e.g., one that regenerates metabolism, plays a central role. Such closures were then seen as the mark of the living and sought everywhere, e.g., among information states as the mark of the cognitive. Every organism was ipso facto a cognitive entity (Maturana & Varela, 1980). Some reflected this position back onto the constructive idea that every closure loop of states would give rise to a semantic system of symbols, so that autonomous entities were ipso facto internally meaningful cognisors (e.g., Stewart, 1996; Pattee, 1993, 1995, 2007).
- 9.
As an alternative to the cognisor approach, others distinguished between dynamical (energetic, material) closure and functional closures (see, e.g., Barandiaran & Moreno, 2006; Barandiaran et al., 2009). It is clear that organisms cannot be energetically closed, because the laws of thermodynamics require that they replace higher-entropy degraded states with lower-entropy (more ordered) ones, and they often will not be materially closed when doing so (e.g., nucleic acid diffusion across common boundaries among slime molds; the several vitamins that humans cannot manufacture but must import). This requires identifying some other features that characterize closure. Moreno and Mossio (2015) choose dynamical constraints as what characterizes closures and argue that, while such constraints do no work in a system and have none done on them, they ‘‘guide’’ the reconstitution of autonomous systems (Moreno & Mossio, 2015, pp. xxvi–xxx). This turns out to be a challenging set of requirements to sustain (Hooker, Foreword pp. x–xi; Hooker, 2013b). It also leaves the origins of cognition and semantics to be explained (cf. Moreno & Mossio, 2015, Chap. 7). Others attempt to have constructive interaction in the context of autonomous organization bear the weight of understanding how cognition and semantics emerge within autonomous systems (e.g., Christensen & Hooker, 2000b, 2002). The roots of these approaches lie in nineteenth-century biological theorists like Simmel (1895) (see Coleman, 2002; Hooker, 2013a; von Uexküll, 1926). The approach opens up an integration of cognition and semantics via intentionality as a Merleau-Ponty (1942/1963) close interactive ‘‘grip’’ (see above and, e.g., Bickhard, 1993; Christensen & Hooker, 2004; Hooker, 2009; cf. Di Paolo, 2003).
- 10.
There are differences among researchers as to how relationships among autonomy, agency, and cognition are properly drawn, and this impacts the development of a dynamically based account. Compare, e.g., Moreno and Mossio (2015), Chaps. 1, 4, and 7, where each new concept represents an elaborated aspect of the preceding one, with one where autonomy, agency, and cognition are each aspects of the same core n-organizational development (Christensen & Hooker, e.g., 2002, 2004). This latter account needs a problem-solving, as opposed to logic-rule-applying, conception of rational process and a similar ‘‘n-organized focus’’ account of intentionality that unifies its development with that of cognition, and both with agency (note 9), obtaining a unified, dynamically based, n-organizationally characterized core framework. The specific dynamical ontology that potentiates this framework, and may support a transition from cognition to a broader mentality, is left open here (cf. notes 8, 9, 16, 17). On the matter of the constitution of an internal perspective and artificial robotics, see Christensen and Hooker (2004); cf. Moreno and Mossio (2015), Chap. 7, and Nolfi (2011). On the biology and sociology of boundaries, see, e.g., Rayner (1997), Nolfi (2011), Bickhard (2011, Sect. 3.1), Bishop (2011, Sect. 3.5), and Hooker (2011b, Sect. 3). Christensen and Hooker (2004), followed by Barandiaran and Moreno (2006), provide a critical perspective on formal robotics – dynamical systems theory (DST, van Gelder variants) and autonomous agent robotics (AAR, Brooks/Braitenberg variants) – and an analysis of cognitive theory that could integrate with them, while Di Paolo (2003) provides a complementary examination of AAR as anchored in the Maturana/Varela tradition.
More widely, Ruiz-Mirazo, together with Moreno and others, pursued the related issues of how minimally artificial chemical cells could be constructed and how they would need to be additionally internally constituted if they are to form evolving communities (e.g., Moreno, 2007; Ruiz-Mirazo et al., 2008; Ruiz-Mirazo & Moreno, 2012; Arnellos et al., 2014).
- 11.
For a sensitive and powerful exposition of steering, goal-directed regulatory n-organization in mind, see Sommerhoff (1974).
- 12.
- 13.
Popper, who emphasized the importance of falsification (‘‘signal mis-match’’) in scientific method, missed this power of scientific cognition by confining himself to immediate logical structure alone, where indeed a falsification conveys no more than ‘‘something is false somewhere.’’ See Popper (1980); cf. Hooker (1995, 2010) and Hoffmaster and Hooker (2018).
- 14.
The detective, for example, has been developing the crime-of-passion profile, impressed by the initial lipstick clue and visits to nearby gay bars, but it has proven increasingly hard to find further useful clues. Several lines of investigation have been proposed and their consequences pursued, for example, that the lipstick belongs to a relative of the victim, leading to tracking down family members and examining any tensions in these relationships, and so on. As these lines dried up, the pressure mounted to look elsewhere, for example, to business dealings, with trial options ranging from gambling debts to defaulting debtor relatives.
- 15.
- 16.
The status of biochemically, dynamically characterized autonomy and the scope for inanimate autonomy, in relation to the positions mentioned in notes 2, 8, and 9 are left as issues for the reader.
- 17.
That there is no neat account of the fundamental metaphysics of mind on offer here emphasizes that naturalist fallibilism remains. Abstraction simply stops where principle is bracketed along with detail; there is no commitment to a formalist n-organizational idealism here.
References
Arnellos, A., Moreno, A., & Ruiz-Mirazo, K. (2014). Organizational requirements for multicellular autonomy: Insights from a comparative case study. Biology & Philosophy, 29, 851–884.
Barandiaran, X., & Moreno, A. (2006). On what makes certain dynamical systems cognitive: A minimally cognitive organization program. Adaptive Behavior, 14(2), 171–185.
Barandiaran, X., Di Paolo, E., & Rohde, M. (2009). Defining agency. Individuality, normativity, asymmetry and spatio-temporality in action. Adaptive Behavior, 17(5), 367–386.
Bechtel, W. (2006). Discovering cell mechanisms: The creation of modern cell biology. Cambridge University Press.
Bechtel, W. (2007). Biological mechanisms, organised to maintain autonomy. In F. Boogerd, F. Bruggeman, J.-H. Hofmeyr, & H. Westerhoff (Eds.), Systems biology: Philosophical foundations. Elsevier.
Bechtel, W., & Abrahamsen, A. (2011). Complex biological mechanisms: Cyclic, oscillatory and autonomous. In Hooker (2011a).
Bennett, C. (1985). Dissipation, information, computational complexity and the definition of organization. In D. Pines (Ed.), Emerging syntheses in science. Proceedings of the founding workshops of the Santa Fe Institute. Addison-Wesley.
Bickhard, M. (1993). Representational content in humans and machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285–333.
Bickhard, M. (2011). Systems and process metaphysics. In Hooker (2011a).
Bishop, R. (2011). Metaphysical and epistemological issues in complex systems. In Hooker (2011a).
Christensen, W. (2004). Self-directedness, integration and higher cognition. Language Sciences, 26(6), 661–692. Cognition and Integrational Linguistics, special edition.
Christensen, W., & Hooker, C. (2000a). An interactivist-constructivist approach to intelligence: self-directed anticipative learning. Philosophical Psychology, 13(1), 5–45.
Christensen, W., & Hooker, C. (2000b). Organised interactive construction: The nature of autonomy and the emergence of intelligence. In A. Etxeberria, A. Moreno, & J. Umerez (Eds.), Communication & Cognition 17(3 & 4), 133–157. Special Edition, The contribution of artificial life and the sciences of complexity to the understanding of autonomous systems.
Christensen, W., & Hooker, C. (2002). Self-directed agents. In J. MacIntosh (Ed.), Contemporary naturalist theories of evolution and intentionality, Canadian Journal of Philosophy, Special Supplementary Volume 19–52.
Christensen, W., & Hooker, C. (2004). Representation and the meaning of life. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 41–69). Elsevier.
Coleman, M. (2002). Taking Simmel seriously in evolutionary epistemology. Studies in History and Philosophy of Science, 33, 59–78.
Collier, J., & Hooker, C. (1999). Complexly organised dynamical systems. Open Systems and Information Dynamics, 6, 241–302.
Di Paolo, E. (2003). Organismically-inspired robotics: Homeostatic adaptation and teleology beyond the closed sensorimotor loop. In K. Murase & T. Asakura (Eds.), Dynamical systems approach to embodiment and sociality (pp. 19–42). Advanced Knowledge International.
Eronen, M. I., & Brooks, D. S. (2018). Levels of organization in biology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 Ed.). https://plato.stanford.edu/archives/spr2018/entries/levels-org-biology/. Accessed 30 Sept 2020.
Farrell, R., & Hooker, C. (2007a). Applying self-directed anticipative learning to science I: Agency and the interactive exploration of possibility space in Ape language research. Perspectives on Science, 15(1), 86–123.
Farrell, R., & Hooker, C. (2007b). Applying self-directed anticipative learning to science II: Learning how to learn across ‘revolutions’. Perspectives on Science, 15(2), 220–253.
Farrell, R., & Hooker, C. (2009). Error, error-statistics and self-directed anticipative learning. Foundations of Science, 14(4), 249–271.
Farrell, R., & Hooker, C. (2013). Design, science and wicked problems. Design Studies, 34(6), 681–705.
Farrell, R., & Hooker, C. (2014). Values and norms between design and science. Design Issues, 30(3), 29–38.
Farrell, R., & Hooker, C. (2015). Designing and sciencing: Reply to Galle and Kroes. Design Studies, 37(1), 1–11.
Hoffmaster, B., & Hooker, C. (2018). Re-reasoning ethics. MIT Press.
Hooker, C. (1995). Reason, regulation and realism. State University of New York Press.
Hooker, C. (2004). Asymptotics, reduction and emergence. British Journal for the Philosophy of Science, 55, 435–479.
Hooker, C. (2009). Interaction and bio-cognitive order. Synthese, 166(3), 513–546. Special edition on interactivism, M. Bickhard (Ed.).
Hooker, C. (2010). Rationality as effective organisation of interaction and its naturalist framework. Axiomathes, 21, 99–172. Special edition on advances in interactivism, M. Bickhard (Ed.).
Hooker, C. (Ed.). (2011a). Philosophy of complex systems (Vol. 10: Handbook of the philosophy of science). Elsevier.
Hooker, C. (2011b). Introduction to philosophy of complex systems. Part A: Towards a framework for complex systems. In C. Hooker (Ed.). (2011a), pp. 3–92.
Hooker, C. (2011c). Conceptualising reduction, emergence and self-organisation in complex dynamical systems. In C. Hooker (Ed.). (2011a), pp. 197–224.
Hooker, C. (2013a). Georg Simmel and naturalist interactivist epistemology of science. Studies in History and Philosophy of Science, Part A, 44(3), 311–317.
Hooker, C. (2013b). On the import of constraints in complex dynamical systems. Foundations of Science, 18(4), 757–780. https://doi.org/10.1007/s10699-012-9304-9
Hooker, C. (2017). A proposed universal model of problem solving for design, science and cognate fields. New Ideas in Psychology, 47(December), 41–48.
Hooker, C. (2018). Re-modelling scientific change: Complex systems frames innovative problem solving. Lato Sensu: revue de la société de philosophie des sciences, 5(1), 4–12.
Hooker, C., & Hooker, G. (2018). Machine learning and the future of realism. In C. Forbes (Ed.), The future of the scientific realism debate: Contemporary issues concerning scientific realism. Spontaneous Generations: A Journal for the History and Philosophy of Science, 9(1), 174–182.
Kauffman, S. (2000). Investigations. Oxford University Press.
Klowden, M. (1995). Blood, sex, and the mosquito: Control mechanisms of mosquito blood-feeding behavior. BioScience, 45, 326–331.
Li, M., & Vitányi, P. (1990). Kolmogorov complexity and its applications. In J. van Leeuwen (Ed.), Handbook of theoretical computer science. Elsevier.
Maturana, H., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel Publishing.
Merleau-Ponty, M. (1942/1963). The structure of behaviour (Trans. A. L. Fisher). Methuen.
Moreno, A. (2007). A systematic approach to the origin of biological organisation. In F. Boogerd, F. Bruggeman, J.-H. Hofmeyr, & H. Westerhoff (Eds.), Systems biology: Philosophical foundations. Elsevier.
Moreno, A., & Mossio, M. (2015). Biological autonomy: A philosophical and theoretical enquiry. Springer.
Moreno, A., Ruiz-Mirazo, K. & Barandiaran, X. (2011). The impact of the paradigm of complexity on the foundational frameworks of biology and cognitive science. In Hooker (2011a).
Nolfi, S. (2011). Behavior and cognition as a complex adaptive system: Insights from robotic experiments. In Hooker (2011a).
Pattee, H. (1993). The limitations of formal models of management, control and cognition. Applied Mathematics and Computation, 56, 111–130.
Pattee, H. (1995). Evolving self-reference: Matter, symbols, and semantic closure. Communication and Cognition - Artificial Intelligence, 12(1–2), 9–28.
Pattee, H. (2007). Laws, constraints and the modelling relation - History and interpretations. Chemistry and Bio-diversity, 4, 2272–2278.
Penrose, R. (1989). The emperor’s new mind: Concerning computers, minds and the laws of physics. Oxford University Press.
Popper, K. (1980). The logic of scientific discovery. Hutchinson. (First published as Logik der Forschung, Wien, 1934).
Rayner, A. (1997). Degrees of freedom: Living in dynamic boundaries. World Scientific.
Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin, and fabrication of life. Columbia University Press.
Ruiz-Mirazo, K., & Moreno, A. (2012). Autonomy in evolution: From minimal to complex life. Synthese, 185(1), 21–52.
Ruiz-Mirazo, K., Umerez, J., & Moreno, A. (2008). Enabling conditions for ‘open-ended evolution’. Biology and Philosophy, 23(1), 67–85.
Russell, B. (1927). The analysis of matter. Kegan Paul.
Simmel, G. (1895). Ueber eine Beziehung der Selektionslehre zur Erkenntnistheorie. Archiv für systematische Philosophie, 1, 34–45. (English translation: part 2 of Coleman 2002).
Smithers, T. (1995). Are autonomous agents information processing systems? In L. Steels & R. Brooks (Eds.), The artificial life route to artificial intelligence: Building situated embodied agents. Lawrence Erlbaum.
Sommerhoff, G. (1974). Logic of the living brain. Wiley.
Stewart, J. (1996). Cognition = life: Implications for higher-level cognition. Behavioural Processes, 35, 311–326.
Varela, F. (1979). Principles of biological autonomy. Elsevier/North Holland.
Varela, F. J., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. BioSystems, 5, 187–196.
von Uexküll, J. (1926). Theoretical biology. Harcourt, Brace.
Acknowledgments
Special thanks to Alvaro Moreno and Matteo Mossio for careful, detailed, and critical appraisal of draft versions and to Hal Brown for challenging comments from a wider philosophical perspective – all of which resulted in a substantially improved essay. The defects remaining surely derive in substantial part from my willful refusal to respond to all comments as their authors intended. With so tricky a topic, I have tried to keep the notion at issue – organization – always at the central focus. In addition, there undoubtedly remain defects not yet appreciated because we all – author and commentators – remain blind to them.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
About this chapter
Cite this chapter
Hooker, C. (2024). On the Organizational Roots of Bio-cognition. In: Mossio, M. (eds) Organization in Biology. History, Philosophy and Theory of the Life Sciences, vol 33. Springer, Cham. https://doi.org/10.1007/978-3-031-38968-9_5
DOI: https://doi.org/10.1007/978-3-031-38968-9_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-38967-2
Online ISBN: 978-3-031-38968-9