Abstract
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants’ behaviour with the predictions of alternative information processing models. This lets us see when and how—during development, and with experience—the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Introduction
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? Here, I review work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric) and combining multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using novel auditory cues.
Coordinating spatial frames of reference
Spatial frames of reference
Spatial relationships can be stored in different frames of reference, each with advantages for specific tasks. To open my car door, it is most useful to store where it is relative to my hand (a body- or self-referenced, egocentric representation). In contrast, to find the car in the car park, perhaps from a new viewpoint, it is most useful to store where it is relative to stable external landmarks (an externally referenced, allocentric representation). The brain encodes representations in different coordinate frames using different specialised substrates (for a review, see Burgess 2008)—for example, body-referenced frames useful for guiding immediate action in parietal cortex (Bremmer et al. 1997), and frames based on external landmarks in the hippocampus (Hartley et al. 2014).
Development of spatial frames of reference
Since Piaget’s pioneering investigations of spatial cognitive development (Piaget and Inhelder 1956), it has been evident that children achieve competence at egocentric responses and tasks earlier than allocentric ones. In particular, when egocentric and allocentric responses conflict, young children tend to follow an incorrect egocentric strategy. For example, in studies by Acredolo (Acredolo 1978; Acredolo and Evans 1980), younger infants who learned to turn to one side (e.g. their right) to find a target, and were then moved and rotated 180°, perseverated with this now-incorrect egocentric response. This points to the multiple challenges of encoding more complex allocentric versus simpler egocentric spatial relationships, updating representations correctly to account for one’s own movement, and selecting the correct reference frame when different frames conflict (for further discussion, see Nardini et al. 2009, and below).
Development: coordinating multiple reference frames
Most of the time, multiple potential encodings or frames—which may be more or less useful for a specific task—are available. Beginning in 2006, our studies addressed the question of when and how multiple reference frames are coordinated in development. In an initial study, 3–6-year olds attempted to recall the locations of objects on an approximately 1 m² board incorporating small surrounding landmarks (Nardini et al. 2006). Board and/or participant were moved between hiding and recall in a factorial design that varied the validity of (1) the self, (2) the wider room, and (3) the small surrounding landmarks as a basis for recall. Children were already competent from age 3 years when self- and/or room-based reference frames were available, but only above chance from 5 years at using the surrounding landmarks alone (and disregarding the other frames). Subsequent modelling of responses indicates that at intermediate ages, children’s responses are a mixture of using the incorrect frames and the correct one (Negen and Nardini 2015). A highly controlled version of the same task using VR—in which children no longer interact with a miniature moving array, but are immersed in the virtual test environment (Negen et al. 2018a)—reached the same conclusion. Simple (e.g. body-referenced) representations are reliably used from a young age, but when these are not valid, correctly coordinating and using only relevant landmarks to respond emerges later, at 4–5 years of age.
Development: coordinating multiple landmarks
Tracing the earliest ages at which allocentric recall (i.e. using only external landmarks) is demonstrably above chance identifies a starting point for allocentric abilities, but these very earliest abilities may be based only on very simple or partial information about external landmarks. For example, in Negen et al. (2018a), the earliest above-chance use of the allocentric frame could be explained by encoding position just along one axis of the space—far short of a fully accurate spatial representation. Similarly, allocentric recall that can be based on roughly matching visual features emerges earlier than that requiring strict representation of spatial relationships (Nardini et al. 2009). A VR study of 3-to-8-year olds’ recall with respect to several distinct landmarks asked how abilities to coordinate these develop (Negen et al. 2019a). The study looked for markers of performance beyond that explicable by use of just the single nearest landmark. The results showed that until around 6 years, allocentric performance was supported by use of a single landmark—a strategy better than egocentric, but still subject to significant errors (e.g. mirror reversals). Only after 6 years was there evidence for coordination of multiple landmarks to improve precision and avoid such errors. Interestingly, however, this was also moderated by the complexity of the environment—in an extremely simple (less naturalistic) space, there was earlier evidence for coordination of multiple landmarks.
Coordinating multiple reference frames and landmarks: developmental mechanisms and bottlenecks
These studies reveal crucial computational changes in spatial recall during early life. We see a progression from reliance on simple (body-based/egocentric) encodings, to those using simple elements of the external environment (e.g. single landmarks, or features of landmarks), to those coordinating multiple landmarks. The competence of typical adults at perceiving and acting flexibly in space emerges from this long developmental trajectory. On comparable experimental tasks, clinical groups with spatial difficulties (e.g. Williams Syndrome) appear to remain at levels of development typical of pre-allocentric children (e.g. Nardini et al. 2008a), as do adult hippocampal patients (King et al. 2002). What are the developmental mechanisms, and what bottlenecks hold back younger children (or clinical groups) from flexible spatial recall? The degree to which these changes represent either reshaping of abilities to encode and represent the relevant information (e.g. by the hippocampus), or abilities to correctly select the relevant encoding (disregarding irrelevant cues or reference frames) is one key question for future research. Initial evidence that individual differences linked to inhibitory control are one predictor of performance (Negen et al. 2019a) suggests that not only encoding, but also selection plays a role. Evidence in the same study that a simpler environment shows earlier development also suggests a role for processes of attention and cue selection. These findings raise interesting questions about how closely the present coordination problems in spatial cognitive development are linked to development of more general, central, cognitive capacities, such as inhibition or cognitive control.
Coordinating multiple sensory signals
Multisensory processing of spatial information
We sense the world using multiple channels of sensory input, including visual, auditory, and haptic. The challenge of situating ourselves in space includes coordinating and combining these disparate information sources. For example, when dealing with changes of viewpoint (see above), visual information is useful for detecting the new viewpoint (e.g. using visual landmarks) and potentially for tracking one’s own movement between the different viewpoints (e.g. using optic flow). Non-visual (e.g. vestibular and kinesthetic) information also crucially helps track one’s own movement to account for viewpoint changes (Simons and Wang 1998; Wang and Simons 1999), including during development (Nardini et al. 2006; Negen et al. 2018a). This is evident in the studies just mentioned because when viewpoint changes happen in the absence of movement-related information (e.g. a new viewpoint is presented, but the participant did not walk there), accuracy is poorer in adults and takes longer to rise above chance in childhood.
Measuring combination of multisensory spatial signals
The evidence reviewed above for the role of movement, as well as vision, comes from spatial tasks that create large cue conflicts. In key test conditions, a viewpoint change is experienced without the corresponding movement—i.e. the environment is rotated in front of the participant, or the participant is virtually ‘teleported’. This leaves unclear the extent to which performance is poor because of (a) the absence of useful movement information, or (b) an incorrect reliance on the (erroneous) movement information that states that no viewpoint change has occurred. We saw that young children just mastering these tasks switch between the latter erroneous strategy and one that correctly disregards movement information (Negen and Nardini 2015), and that performance on a related task is predicted by individual differences in inhibitory control (Negen et al. 2019a). To more clearly determine how spatial signals and cues interact, a more recent approach (Cheng et al. 2007) applies Bayesian decision theory to questions about how spatial information is combined. This avoids selection and conflict problems and also lets us measure the degree to which using two signals together leads to the precision benefits expected for a rational (Bayesian) ideal decision-maker. The approach essentially (see Ernst and Banks 2002; Rohde et al. 2016) varies the availability of cue 1 and cue 2 across conditions (testing cue 1 alone, cue 2 alone, and cues 1 + 2 together) to test for Bayesian precision benefits. It also uses small conflicts (cue 1 vs. cue 2 indicate slightly differing target locations) to measure the relative reliance on (weighting for) each cue.
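The core computation behind these predictions can be stated concretely. The following is a minimal illustrative sketch (the numerical values are hypothetical, not taken from any of the studies discussed): each cue is modelled as a Gaussian estimate, and the ideal (Bayesian) decision-maker averages the cues with weights proportional to their inverse variances, yielding a combined estimate that is more precise than either cue alone.

```python
import numpy as np

def bayes_combine(mu1, sigma1, mu2, sigma2):
    """Reliability-weighted (inverse-variance) combination of two Gaussian
    cue estimates, following the standard ideal-observer model
    (Ernst and Banks 2002; Rohde et al. 2016)."""
    # Each cue's weight is its reliability (1/variance), normalised to sum to 1.
    w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
    w2 = 1 - w1
    mu_combined = w1 * mu1 + w2 * mu2
    # Combined variance is the harmonic combination of the two variances,
    # so it is always smaller than the smaller single-cue variance.
    sigma_combined = np.sqrt(1 / (1 / sigma1**2 + 1 / sigma2**2))
    return mu_combined, sigma_combined

# Hypothetical reliabilities: cue 1 sd = 2 cm, cue 2 sd = 4 cm, with a small
# experimental conflict between the locations the two cues indicate.
mu_c, sigma_c = bayes_combine(mu1=10.0, sigma1=2.0, mu2=12.0, sigma2=4.0)
# The combined estimate lies nearer the more reliable cue (weight 0.8 on
# cue 1, so mu_c = 10.4) and is more precise than either cue alone
# (sigma_c ~ 1.79 < 2.0).
```

Measuring where responses fall between the two conflicting cue values gives the empirical cue weights, and comparing single-cue with two-cue precision tests for the predicted benefit.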
Combination of multisensory signals for navigation
We applied this approach to a developmental navigation task (Nardini et al. 2008b). Illuminated visual landmarks in an otherwise dark room (‘cue 1’) could potentially be used together with non-visual (vestibular, kinesthetic) movement information (‘cue 2’) to return collected objects directly to their previous locations after walking two legs of a triangle (i.e. triangle completion). A Bayesian decision-maker would be measurably more precise with both cues together than with either alone. While adults met this prediction, children aged 4 and 8 years did not—they were no more precise with two cues together than with the best single cue, and the model that best explained their precision and cue weighting was one in which they selected a single cue to use on any trial, rather than combining (averaging) them. This indicates that difficulties with spatial recall in earlier tasks (e.g. Nardini et al. 2006) revealed not only an immaturity in selecting the correct representation, but also fundamental immaturities in combining multiple valid signals efficiently when they are available. The finding of efficient or near-optimal spatial cue combination in adults has been replicated and extended (Bates and Wolbers 2014; Chen et al. 2017; Sjolund et al. 2018), while the finding of immaturity in cue combination long into childhood has been replicated across many tasks, including ones involving more basic (e.g. table-top, non-navigational) spatial information—described next.
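The contrast between the averaging model that adults follow and the trial-by-trial switching model that best fit the children can be made concrete in simulation. A minimal sketch with hypothetical single-cue noise levels: switching yields response variability between the two single-cue levels, whereas inverse-variance-weighted averaging yields variability below the best single cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated trials
sigma1, sigma2 = 2.0, 4.0  # hypothetical single-cue noise levels (sd)

# Noisy internal estimates from each cue on every simulated trial.
est1 = rng.normal(0.0, sigma1, n)
est2 = rng.normal(0.0, sigma2, n)

# Combination model: inverse-variance-weighted average (the Bayesian ideal).
w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
combined = w1 * est1 + (1 - w1) * est2

# Switching model: follow one randomly chosen cue on each trial.
follow_cue1 = rng.random(n) < 0.5
switched = np.where(follow_cue1, est1, est2)

# Predicted response variability:
#   combined.std() is analytically ~1.79, below the best single cue (2.0)
#   switched.std() is analytically ~3.16, between the two single-cue levels
```

Fitting these two models to the observed response variances and cue weights is what distinguishes genuine combination from switching in the studies above.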
Development of spatial combination of multisensory information
Basic abilities to understand multisensory correspondences and to benefit from redundant multisensory information of some kinds are present in early life (Bahrick and Lickliter 2000; Kuhl and Meltzoff 1982). However, a growing body of research shows specifically that the Bayes-like precision benefits adults experience when combining multisensory spatial signals take until around 10 years of age or later to emerge. As well as not showing multisensory precision gains when navigating (Nardini et al. 2008b), unlike adults (Ernst and Banks 2002), children do not improve their precision at comparing the heights of bars with vision and touch together (Gori et al. 2008), in part because they overweight the less reliable cue. Similarly, unlike adults (van Beers et al. 1999), children do not improve their abilities to localise a point on a table-top with vision and proprioception together (Nardini et al. 2013). Even within the single sense of vision, unlike adults (Hillis et al. 2004), children do not combine two distinct cues to surface orientation (stereo disparity and texture) until the age of 12 years (Nardini et al. 2010); younger children’s behaviour best fits switching between following one cue or the other on any trial.
Development of multisensory spatial combination: mechanisms and bottlenecks
These failures to achieve Bayes-like precision gains during perception long into childhood may at first seem surprising. From a decision-theoretic point of view, children—whose precision at most simple ‘unimodal’ perceptual tasks takes many years to attain adult levels—would especially stand to benefit from efficiently combining the relatively noisy information sources they have. However, to achieve efficient combination, the system must overcome a number of developmental challenges (Nardini and Dekker 2018).
Challenge 1: calibration
First, the different senses or signals need to be correctly calibrated. Initial evidence that calibration plays a role comes from a study in which children below 8 years of age did combine visual and auditory signals to localise targets, in a task designed to improve unisensory calibration (Negen et al. 2019b).
Challenge 2: appropriate weighting
Second, efficient, Bayes-like combination of signals requires each to be weighted in proportion to its relative reliability, or inverse variance (Ernst and Banks 2002; Rohde et al. 2016). There is evidence for mis-weighting of signals in development, including overweighting of unreliable (Gori et al. 2008) and even completely irrelevant (Petrini et al. 2015) cues.
Challenge 3: neural substrates for efficient combination
A third challenge—not necessarily distinct from the above two, but expressing them at a different level of analysis—is maturation of the still poorly understood neural substrates for efficient averaging of sensory signals. It is clear that combination takes place at multiple levels of a hierarchy of sensory processing and decision-making (Rohe and Noppeney 2016), including in early ‘sensory’ areas (Gu et al. 2008). Our initial work using fMRI shows that immaturities in the earliest component of this network accompany inefficient cue combination. ‘Automatic’ combination of visual cues to 3D layout (surface slant) in early sensory (‘visual’) areas, for stimuli displayed in the background while participants carry out a different task at fixation, is present in adults (Ban et al. 2012) and in 10-to-12-year olds, but not 6-to-10-year olds (Dekker et al. 2015). Thus, acquiring efficient multisensory combination abilities for spatial judgments would seem to depend on developmental reshaping of sensory processing at a very early level.
Enhancing human perception and action in space using new sensory signals
Enhancing human perception and action in space: opportunities
In this final section, I sketch out applications of the work reviewed above to the newer domain of optimising human perception and action using ‘new’ sensory signals—for example, enhancing spatial abilities using new devices or sensors (Nagel et al. 2005). There is increasing evidence that the organisation of neural substrates for perception and action in space can be remarkably flexible (Amedi et al. 2017). For example, some blind individuals are expert at using click echoes to sense spatial layout, recruiting ‘visual’ cortex for perception of layout through sound (Thaler et al. 2011). Advances in wearable technology also make it increasingly feasible to provide people with novel sensors and signals. Devices to substitute or augment spatial perception via sound or vibrotactile cues have been developed and show promising signs of everyday use and reshaping perception (Maidenbaum et al. 2014). Which challenges must be met in order for approaches such as these to be integrated effectively into people’s everyday spatial cognitive repertoire?
Enhancing human perception and action in space: challenges
There are key parallels between children first learning to coordinate natural sensory signals (Sect. “Coordinating multiple sensory signals”, above) and people of all ages learning to integrate newly learned sensory skills into their existing multisensory repertoire. As an example, consider learning to use a new device that translates distance or depth into an auditory signal such as pitch. The three challenges identified above are also crucial here: first, achieving an accurate calibration of the new sense to the familiar representation of space; second, appropriately weighting the new signal relative to the old one when both provide useful information; and third, at the neural level of analysis, implementing these processes in highly efficient circuits supporting subjectively effortless or ‘automatic’ perception (e.g. those in early ‘sensory’ areas).
Enhancing human perception and action in space: initial findings
With these questions and issues in mind, we have embarked on new studies of the scope for enhancing human perception and action in space using new sensory signals. In an initial study (Negen et al. 2018b), in a VR environment, we trained healthy adults to use an echo-like auditory cue, together with a noisy visual cue, to judge distance to an object. Within five short (approx. 1-h) training sessions, we found evidence for efficient Bayes-like combination, including improved precision (albeit falling short of the Bayes-optimal improvement) and reweighting with changing cue reliabilities. Recalling that children often do not show combination even with familiar, natural cues (Nardini et al. 2008b), this suggests that the mature perceptual-cognitive system may bring some advantages to novel cue combination problems, and offers a promising outlook on flexibly enhancing human spatial abilities. However, many questions remain—including the prospects and training time course for eventually embedding such new abilities in low-level sensory processing, most likely to support subjectively effortless or ‘automatic’ perception.
Enhancing human perception and action in space: future directions
Ongoing work is investigating the manner in which newly acquired spatial skills become embedded in perception. For example, there is initial evidence that within ten training sessions, and with another visual cue with a more natural form of noise (uncertainty), participants still do not attain Bayes-optimal performance; however, the skill enhances speed (as well as accuracy) of responses and resists verbal interference (Negen et al. 2021). Sensitive model-based tests of some of these abilities are assisted by analysis methods beyond those in the classic cue combination literature (Aston et al. 2021). Key future directions include investigating extended training, neural substrates (using fMRI), motor/action tasks, and other perceptual problem domains (e.g. sensing object properties, as well as their spatial locations).
Summary and conclusions
The research described here has addressed two combination problems underlying perception and action in space: coordinating multiple reference frames and coordinating multiple sensory signals. Our understanding of development in these domains has been improved by adoption of a model-based approach, which, for example, compares performance with the predictions for an ideal (Bayesian) decision-maker. Both systems show substantial and extended development during childhood. In the domain of reference frames, key outstanding questions include the extent to which developmental improvements in abilities to either represent or select relevant information play a crucial role, and the extent to which these can be linked to maturation of specific brain systems and/or development of broader cognitive abilities. In the domain of multiple sensory signals, key outstanding questions include factors limiting efficient combination of signals in childhood, and the extent to which these can be tied to specific elements of information processing models and/or maturation of specific neural substrates. There are important parallels between the information processing challenges for children using their familiar senses and those for adults learning to use new sensory signals. Therefore, developmental research also has an important role in guiding the search for optimal approaches to enhancing human spatial abilities using technology.
References
Acredolo LP (1978) Development of spatial orientation in infancy. Dev Psychol 14(3):224–234. https://doi.org/10.1037/0012-1649.14.3.224
Acredolo LP, Evans D (1980) Developmental changes in the effects of landmarks on infant spatial behavior. Dev Psychol 16(4):312–318. https://doi.org/10.1037/0012-1649.16.4.312
Amedi A, Hofstetter S, Maidenbaum S, Heimler B (2017) Task selectivity as a comprehensive principle for brain organization. Trends Cogn Sci 21(5):307–310. https://doi.org/10.1016/j.tics.2017.03.007
Aston S, Negen J, Nardini M, Beierholm U (2021) Central tendency biases must be accounted for to consistently capture Bayesian cue combination in continuous response data. Behav Res Methods. https://doi.org/10.3758/s13428-021-01633-2
Bahrick LE, Lickliter R (2000) Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Dev Psychol 36(2):190–201. https://doi.org/10.1037/0012-1649.36.2.190
Ban H, Preston TJ, Meeson A, Welchman AE (2012) The integration of motion and disparity cues to depth in dorsal visual cortex. Nat Neurosci 15:636–643. https://doi.org/10.1038/nn.3046
Bates SL, Wolbers T (2014) How cognitive aging affects multisensory integration of navigational cues. Neurobiol Aging 35(12):2761–2769. https://doi.org/10.1016/j.neurobiolaging.2014.04.003
Bremmer F, Duhamel JR, Ben Hamed S, Graf W (1997) The representation of movement in near extra-personal space in the macaque ventral intraparietal area (VIP). In: Thier P, Karnath HO (eds) Parietal lobe contributions to orientation in 3D space. Springer, Heidelberg
Burgess N (2008) Spatial cognition and the brain. Ann N Y Acad Sci 1124(1):77–97. https://doi.org/10.1196/annals.1440.002
Chen X, McNamara TP, Kelly JW, Wolbers T (2017) Cue combination in human spatial navigation. Cogn Psychol 95:105–144. https://doi.org/10.1016/j.cogpsych.2017.04.003
Cheng K, Shettleworth SJ, Huttenlocher J, Rieser JJ (2007) Bayesian integration of spatial information. Psychol Bull 133(4):625–637. https://doi.org/10.1037/0033-2909.133.4.625
Dekker TM, Ban H, van der Velde B, Sereno MI, Welchman AE, Nardini M (2015) Late development of cue integration is linked to sensory fusion in cortex. Curr Biol 25(21):2856–2861. https://doi.org/10.1016/j.cub.2015.09.043
Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. https://doi.org/10.1038/415429a
Wang RF, Simons DJ (1999) Active and passive scene recognition across views. Cognition 70(2):191–210. https://doi.org/10.1016/S0010-0277(99)00012-8
Gori M, Del Viva M, Sandini G, Burr DC (2008) Young children do not integrate visual and haptic form information. Curr Biol 18(9):694–698. https://doi.org/10.1016/j.cub.2008.04.036
Gu Y, Angelaki DE, Deangelis GC (2008) Neural correlates of multisensory cue integration in macaque MSTd. Nat Neurosci 11:1201–1210. https://doi.org/10.1038/nn.2191
Hartley T, Lever C, Burgess N, O’Keefe J (2014) Space in the brain: how the hippocampal formation supports spatial cognition. Philos Trans R Soc Lond Ser B 369:20120510. https://doi.org/10.1098/rstb.2012.0510
Hillis JM, Watt SJ, Landy MS, Banks MS (2004) Slant from texture and disparity cues: optimal cue combination. J Vis. https://doi.org/10.1167/4.12.1
King JA, Burgess N, Hartley T, Vargha-Khadem F, O’Keefe J (2002) Human hippocampus and viewpoint dependence in spatial memory. Hippocampus 12(6):811–820. https://doi.org/10.1002/hipo.10070
Kuhl PK, Meltzoff AN (1982) The bimodal perception of speech in infancy. Science 218(4577):1138–1141. https://doi.org/10.1126/science.7146899
Maidenbaum S, Hanassy S, Abboud S, Buchs G, Chebat DR, Levy-Tzedek S, Amedi A (2014) The “EyeCane”, a new electronic travel aid for the blind: technology, behavior & swift learning. Restor Neurol Neurosci 32(6):813–824. https://doi.org/10.3233/RNN-130351
Nagel SK, Carl C, Kringe T, Märtin R, König P (2005) Beyond sensory substitution—learning the sixth sense. J Neural Eng 2(4):R13–R26. https://doi.org/10.1088/1741-2560/2/4/R02
Nardini M, Dekker TM (2018) Observer models of perceptual development. Behav Brain Sci 41:e238. https://doi.org/10.1017/S0140525X1800136X
Nardini M, Burgess N, Breckenridge K, Atkinson J (2006) Differential developmental trajectories for egocentric, environmental and intrinsic frames of reference in spatial memory. Cognition 101(1):153–172. https://doi.org/10.1016/j.cognition.2005.09.005
Nardini M, Atkinson J, Braddick O, Burgess N (2008a) Developmental trajectories for spatial frames of reference in Williams syndrome. Dev Sci 11(4):583–595. https://doi.org/10.1111/j.1467-7687.2007.00662.x
Nardini M, Jones P, Bedford R (2008b) Development of cue integration in human navigation. Curr Biol 18(9):689–693. https://doi.org/10.1016/j.cub.2008.04.021
Nardini M, Thomas RL, Knowland VCP, Braddick OJ, Atkinson J (2009) A viewpoint-independent process for spatial reorientation. Cognition 112(2):241–248. https://doi.org/10.1016/j.cognition.2009.05.003
Nardini M, Bedford R, Mareschal D (2010) Fusion of visual cues is not mandatory in children. Proc Natl Acad Sci USA 107(39):17041–17046. https://doi.org/10.1073/pnas.1001699107
Nardini M, Begus K, Mareschal D (2013) Multisensory uncertainty reduction for hand localization in children and adults. J Exp Psychol Hum Percept Perform 39(3):773–787. https://doi.org/10.1037/a0030719
Negen J, Nardini M (2015) Four-year-olds use a mixture of spatial reference frames. PLoS ONE 10(7):e0131984. https://doi.org/10.1371/journal.pone.0131984
Negen J, Heywood-Everett E, Roome HE, Nardini M (2018a) Development of allocentric spatial recall from new viewpoints in virtual reality. Dev Sci 21(1):e12496. https://doi.org/10.1111/desc.12496
Negen J, Wen L, Thaler L, Nardini M (2018b) Bayes-like integration of a new sensory skill with vision. Sci Rep 8(1):16880. https://doi.org/10.1038/s41598-018-35046-7
Negen J, Ali LB, Chere B, Roome HE, Park Y, Nardini M (2019a) Coding locations relative to one or many landmarks in childhood. PLoS Comput Biol 15(10):e1007380. https://doi.org/10.1371/journal.pcbi.1007380
Negen J, Chere B, Bird L-A, Taylor E, Roome HE, Keenaghan S, Thaler L, Nardini M (2019b) Sensory cue combination in children under 10 years of age. Cognition 193:104014. https://doi.org/10.1016/j.cognition.2019.104014
Negen J, Bird L-A, Slater H, Thaler L, Nardini M (2021) A new sensory skill shows automaticity and integration features in multisensory interactions (pre-print). BioRxiv. https://doi.org/10.1101/2021.01.05.425430
Petrini K, Jones PR, Smith L, Nardini M (2015) Hearing where the eyes see: children use an irrelevant visual cue when localizing sounds. Child Dev 86(5):1449–1457. https://doi.org/10.1111/cdev.12397
Piaget J, Inhelder B (1956) The child’s conception of space. Routledge & Kegan Paul, London
Rohde M, van Dam LCJ, Ernst M (2016) Statistically optimal multisensory cue integration: a practical tutorial. Multisens Res 29(4–5):279–317
Rohe T, Noppeney U (2016) Distinct computational principles govern multisensory integration in primary sensory and association cortices. Curr Biol 26(4):509–514. https://doi.org/10.1016/j.cub.2015.12.056
Simons DJ, Wang RF (1998) Perceiving real-world viewpoint changes. Psychol Sci 9(4):315–320. https://doi.org/10.1111/1467-9280.00062
Sjolund LA, Kelly JW, McNamara TP (2018) Optimal combination of environmental cues and path integration during navigation. Mem Cognit 46(1):89–99. https://doi.org/10.3758/s13421-017-0747-7
Thaler L, Arnott SR, Goodale MA (2011) Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS ONE 6(5):e20162. https://doi.org/10.1371/journal.pone.0020162
van Beers RJ, Sittig AC, Denier van der Gon JJ (1999) Integration of proprioceptive and visual position-information: an experimentally supported model. J Neurophysiol 81:1355–1364. https://doi.org/10.1152/jn.1999.81.3.1355
Acknowledgements
Thanks to all my colleagues and collaborators, especially James Negen, Tessa Dekker, Ulrik Beierholm, and Lore Thaler.
Funding
This work was funded by UK Economic and Social Research Council Grants RES-062-23-0819, RES-061-25-0523, ES/N01846X/1; Grant 220020240 from the James S. McDonnell Foundation 21st Century Science Scholar in Understanding Human Cognition Program; Research Project Grant RPG-2017-097 from the Leverhulme Trust; and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 820185).
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments. This article does not contain any studies with animals performed by the author.
Informed consent
Informed consent was obtained from all individual participants, or (as appropriate) their parents or caregivers.
This article is a contribution to the proceedings of the “8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces” (ICSC 2021).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Nardini, M. Merging familiar and new senses to perceive and act in space. Cogn Process 22 (Suppl 1), 69–75 (2021). https://doi.org/10.1007/s10339-021-01052-3