Abstract
Neuromorphic engineering is concerned with the emulation of biological learning and memory processes in hardware. The use of memristive devices, i.e., non-volatile memory devices, has given this field a significant boost in the last decade. However, most of today's efforts are aimed at the hardware implementation of artificial intelligence computational methods, while the emulation of biological computational methods is less pursued, even though the latter holds enormous potential for information technology. To exploit it, network-dependent cognitive functionalities from biology must be identified and transferred to technical systems. In this chapter, we show one possible approach. Using the hippocampus, the central structure of the mammalian brain responsible for learning new information, as an example, we show how elementary cognitive functions can be investigated by behavioral tests in humans and how their functionality can be broken down to network-dependent functionalities. Furthermore, we show how these functionalities can be technically reproduced in a memristive network model.
1 Introduction
Neuromorphic engineering goes back to Carver Mead, who in the 1980s used the then new silicon technology to emulate biological circuits of nervous systems [1]. His work was motivated by the hope to better understand the functions of the brain and to partly reproduce complex processes in technical systems. In addition to Mead, the physicist and Nobel laureate Richard Feynman and the mathematician John Hopfield were among the fathers of neuromorphic engineering. With the phrase on his blackboard at the time of his death, "What I cannot create, I do not understand", Richard Feynman provided the motivation for generations of scientists in the field. In recent years, neuromorphic engineering has attracted enormous technological interest, as it also enables applications in the field of artificial intelligence (AI) by providing hardware that can be better tailored to the needs of AI [2]. In this context, neuromorphic systems are now seen as hardware realizations of biologically inspired computational architectures, and the field has expanded significantly, especially in the last decade [2, 5, 56].
Important components for the realization of biological computational architectures in hardware are memristive devices (also called memristors), which make it possible to emulate local biological learning paradigms, in particular synaptic plasticity, in hardware [2,3,4]. Memristive devices are two-terminal electronic devices that change their resistance when electrical signals are applied and retain this change even after the signals are switched off [5]. Thus, these devices belong to the class of non-volatile memories and, as we will show, are ideal for use as artificial synapses in neuromorphic systems [6].
The change in coupling strength between neurons, i.e., synaptic plasticity, is the essential building block for learning and memory processes in biological nervous systems and forms the cellular correlate of dynamic biological information processing [7], as we will consider in more detail in Sect. 2.1. Thus, the emulation of cellular forms of learning by means of memristive devices is central to the development of neuromorphic systems and requires the reproduction of critical neural information processes within memristive devices. However, this directly raises the question of what the requirements for the devices are and which functions are essential for the network-level processes of biological information processing and memory formation. It is therefore indispensable to take the neuronal network architecture and learning rules into account when designing suitable memristive devices. This, however, poses further questions to biology as to how information processes operate in our brain. Here, it is the global network level that allows higher forms of learning and determines our behavior and actions [64].
Even though scientists have been researching for centuries how our brain works and what the secrets of information processing are, the functioning of the brain is only incompletely understood and we are far from a true comprehension of biological information processing [64]. However, in neurobiology, significant progress has been made in the last decades in exploring information processing, and specifically memory formation, at the mesoscopic level of different networks in our brain [8]. The progress made in the field of neurobiology has not only provided a deeper understanding of the computational processes underlying cognitive functions but has also revealed the structural foundations of the neural circuits and network architectures responsible for these processes. These developments offer an excellent opportunity to investigate the relationship between the structure and function of cognitive processes, with a particular focus on memory function, as a model for technological emulation and modeling. In this context, the hippocampus, which plays a critical role in memory formation (as described in Sect. 2.2), has emerged as an especially promising candidate for the emulation of neurobiological learning patterns with memristive devices. In addition to a large number of anatomical findings, hippocampus-associated and network-dependent memory functions have been identified (e.g., spatial navigation, pattern completion, and pattern separation) that can be attributed to distinct neuronal networks in the hippocampus [9]. This allows the investigation of learning and memory functions on a global level in the form of behavioral tests in humans, which is the subject of Sect. 3. However, this poses the challenge of using suitable tests to study forms of learning and to obtain as detailed information as possible about the performance of the hippocampus [9]. Starting from those functional relationships of global network-based learning forms, we will show in Sect. 4 how they can be reproduced within neuromorphic networks based on memristive devices. In particular, we will address the question of which device properties are important for mimicking synaptic plasticity in the emulation of network-based learning forms.
Thus, in this book chapter, we want to address the original motivation of neuromorphic engineering and show how biological paradigms of network-dependent learning forms can be adapted for emulation with memristive devices. Furthermore, we demonstrate how global functionalities of memory formation can be extracted and reconstructed in such a way that they can be described by simple network structures. The aim is to show how a conceptual bridge can be built from the multi-dimensional global network level to the cellular level. For this purpose, we follow the systematics shown in Fig. 1 and demonstrate how cellular learning forms can be emulated so that they can be transferred to a multidimensional network level, i.e., how one can move from the microstructure of synapses and neurons to a mesoscopic structure of only a few connected neurons to the macroscopic structure of a nervous system.
2 Neurobiological Learning Principles
2.1 Cellular Learning Paradigms
The basis of learning and memory is activity-dependent change of the connections between individual neurons. These changes are achieved by a temporary strengthening or weakening of synaptic connections and are referred to as synaptic plasticity [7]. Synaptic plasticity describes a change in the efficiency of synaptic transmission through repeated or persistent activity of the connected input cells.
Long-term potentiation (LTP) is the classic paradigm of synaptic plasticity and is regarded as the cellular basis for memory formation. This cellular phenomenon was first described by Bliss and Lømo in 1973 in the dentate area of the hippocampus [10]. However, most experiments on LTP have been performed at the junction between the CA3 and CA1 areas of the hippocampus [65]. There, a short, high-frequency stimulation of the axons connecting areas CA3 and CA1 can elicit a sustained enhancement of the excitatory postsynaptic potential (EPSP) (see Fig. 2). The transmission of these potentials from CA3 to CA1 occurs via the neurotransmitter glutamate acting on voltage-dependent NMDA (N-methyl-D-aspartate) receptors, whereby glutamate binding triggers the EPSP at the synapse between the Schaffer collaterals and CA1 pyramidal cells. High-frequency stimulation leads to a persistent strengthening of CA1 synapses and thus produces a long-lasting increase in signal transmission between two neurons. This elicited synaptic plasticity makes a crucial contribution to memory formation, with LTP acting as a surrogate of information storage in the central nervous system (CNS) [11]. In this process of input-dependent increased cellular excitability, the synaptic transmission efficiency is increased over hours to days via a functional amplification and weighting of synaptic connections. In contrast to LTP, long-term depression (LTD), i.e., a reduction in the strength of synaptic transmission, can be evoked by low-frequency stimulation. Four principal properties underlie LTP/LTD (see Fig. 2): (i) input specificity, i.e., LTP occurs only at the stimulated synaptic connection; (ii) associativity, i.e., simultaneous activity at a stronger synaptic connection enables LTP at an associatively linked weaker synapse; (iii) cooperativity, i.e., LTP can be evoked by the cooperative activity of input signals; and (iv) persistence, i.e., synaptic transmission is increased over hours to weeks.
NMDA receptors associated with LTP are also thought to be closely linked to the formation of higher-order network activity i.e., hippocampal place cell representations leading to cognitive maps, and theta rhythm [12, 13]. It has been shown that learning indeed directly induces LTP processes in CA1 [11]; also, an impediment to the maintenance of LTP results in a disruption of spatial memory of already stored information [14]. Importantly, these fundamental mechanisms of memory formation in the CA1 region are also putatively disrupted in human neurological memory disorders, such as transient global amnesia and Alzheimer's disease [15].
Other correlates of synaptic plasticity are spike-timing-dependent plasticity (STDP) and paired-pulse facilitation (PPF). In these plastic processes, augmentation of synaptic transmission is achieved by coupling input signals within a critical time window of a few to several hundred milliseconds. A detailed description of STDP is given in Sect. 4.
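As a rough illustration of the timing dependence described above, pair-based STDP is often written as an exponentially decaying weight change. The following sketch uses assumed parameter values (the amplitudes and time constants are illustrative and not taken from this chapter):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair with dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates, post-before-pre depresses, and the
    effect decays exponentially within a window of tens of milliseconds."""
    if dt_ms > 0:      # pre before post -> potentiation (LTP-like)
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:    # post before pre -> depression (LTD-like)
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

For example, `stdp_dw(10.0)` is positive while `stdp_dw(-10.0)` is negative, and both shrink in magnitude as the pairing interval grows, mirroring the critical time window mentioned above.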
2.2 Network Dependent Learning Paradigms
The previously discussed mechanisms of synaptic plasticity are the basis for memory and learning in the brain. However, memory and learning, i.e., the storage of memories and events within a temporal and local framework, require more complex network structures. The biological substrates of the various types of memory can be assigned to different areas of the brain [16]. In particular, the episodic memory content of our personal experiences is critically reliant on the hippocampus [17, 18]. An effective memory system (i.e., one with minimal interference and maximal capacity) must provide at least two cognitive functions: first, the rapid storage of experiences as individual events (thereby avoiding the ‘overwriting’ of similar information, known as ‘catastrophic interference’), and second, the retrieval of those memories when similar events are encountered [19].
In this context, two functions of the hippocampus, namely pattern separation and pattern completion, are highlighted here, as they are essential cognitive processes for the encoding and retrieval of episodes [19, 20]. Theories regarding pattern separation and completion derived from computational approaches have been consistently supported by studies in rodents (see [21] for a review). Recent data recorded in humans using modern imaging techniques show that pattern completion and separation play a critical role in human (and other mammalian) learning and are subject to aging or degenerative processes, such as in Alzheimer's disease [21, 22].
A schematic representation of pattern separation and pattern completion is shown in Fig. 3a. Pattern completion allows incomplete representations to be ‘completed’ by previously stored representations. Pattern separation is the process of representing and storing similar representations in a unique, non-overlapping (orthogonalized) manner. The biological significance of this information discrimination is that new information does not overwrite similar, previously stored information. Overwriting would lead to catastrophic interference and ultimately prevent any new learning.
A schematic representation of the network topology of the individual hippocampal fields is shown in Fig. 3b. The hippocampal dentate gyrus (DG) and the CA3 field have been attributed the functions of pattern separation and completion, respectively. Recurrent axon collaterals serve as the basis of pattern completion, as they enable auto-associative networks [23]. In detail, information is routed from the entorhinal cortex (EC) directly to the subnetworks in the DG and CA3 via the perforant path. Additionally, these subnetworks receive further projections via their own recurrent collaterals. The network in CA3 additionally receives information via the mossy fibers from the granule cells of the DG. In particular, the distinct recurrent network in CA3 serves as an auto-associative network that enables the completion of previously stored information when incomplete stimuli are presented. The mossy pathway from the DG to CA3 serves to establish separate pattern representations in the context of new learning and to reduce interference, whereas the direct input from the EC enables the retrieval or presentation of known information. Animal experimental data provide evidence that dentate gyrus networks are necessary for pattern separation, whereas CA3 networks are critical for pattern completion [21]. Finally, CA1 compares new representations from the EC with the predicted representations from CA3 and thus serves a comparator function (cf. Fig. 3b) [39].
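The auto-associative function attributed to the recurrent CA3 collaterals can be illustrated with a minimal Hopfield-style network, a toy sketch rather than a biophysical model: patterns of ±1 activity are stored via Hebbian outer products in a symmetric weight matrix, and a degraded cue is iteratively driven back to the closest stored pattern, i.e., pattern completion:

```python
def train(patterns):
    """Hebbian outer-product storage of ±1 patterns (zero self-coupling)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, steps=5):
    """Synchronous threshold updates drive a degraded cue toward a stored pattern."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s
```

Storing a single 8-unit pattern and presenting a cue with two flipped units, `recall` restores the original pattern, which is the essence of completing a partial representation from an auto-associative store.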
Theories based on computational models of pattern separation processing and experimental rodent studies that measure the behavioural outcome on the basis of hippocampal place cell remapping both corroborate that the DG/CA3 network is critically involved in pattern separation [24]. Studies in humans support this finding by means of fMRI investigations that measured the activity of hippocampal areas during behavioural paradigms that tax pattern separation [22, 25].
This network function is further characterized by the fact that the represented network circuits are arranged in multiple layers in three-dimensional space, which increases computational capacity. The pattern separation function is favored by the principle of sparse connectivity from the DG to the CA3 cells (46 inputs per CA3 cell) [9]. Furthermore, synaptic plasticity in the form of LTP and LTD, i.e., the strengthening or weakening of synaptic transmission, is required for the storage of information in these networks (see Fig. 3b). Also, these networks show a significant fault tolerance (graceful degradation), in the sense that errors due to lost components can be compensated by the distributed representation.
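The decorrelating effect of sparse, expansive recoding, as attributed to the DG, can be sketched with a toy model (all parameters here are illustrative assumptions, not anatomical values): overlapping binary input patterns are projected through a fixed random expansion and sparsified by keeping only the most active units:

```python
import random

def random_projection(n_in, n_out, seed=0):
    """Fixed random weights standing in for a divergent, expansive projection."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]

def sparse_code(x, proj, k):
    """Project input x and keep only the k most active units (winner-take-all),
    a crude stand-in for sparse coding in the DG."""
    h = [sum(w * xi for w, xi in zip(row, x)) for row in proj]
    winners = sorted(range(len(h)), key=lambda i: h[i], reverse=True)[:k]
    code = [0] * len(h)
    for i in winners:
        code[i] = 1
    return code

def overlap(a, b):
    """Fraction of shared active units between two binary codes."""
    shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    return shared / max(sum(a), sum(b))
```

With a large expansion and strong sparsification, two inputs that share half of their active units typically map to output codes whose overlap is well below that of the inputs, i.e., the representations are separated (orthogonalized).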
3 Investigating Hippocampal Functions in Animals and Humans
High demands are placed on our memory system in daily life due to constantly changing environmental conditions that require continuous mnemonic processing. As shown in Sect. 2.2, this functionality is mapped in the hippocampus by the cognitive, network-dependent functions of pattern completion and pattern separation. These functions reduce interference between memories and generalizations about similar events, thus contributing to memory formation [21]. The challenge, however, is the study of these functions and, in particular, the identification of network-dependent mechanisms with respect to memory performance, as sketched in Fig. 1. In particular, it is challenging to elucidate the role of hippocampal subfield processing in pattern separation and pattern completion in humans.
In this section, we will show how this can be approached by applying behavioural tests addressing human non-semantic memory. Here, we will first discuss classic memory tests and especially the Mnemonic Similarity Task (MST, [27]). We further show how behaviour in a pattern separation task can be studied in two human hippocampal lesion models: first, selective CA1 subfield lesions in amnesia and, second, preferential neurodegeneration in the DG/CA3 subfields in a patient cohort with a rare inflammation of the brain (LGI1 encephalitis). These natural hippocampal lesion models help to examine and understand a causal relationship between anatomical structures and pattern separation performance. The development of an alternative to the MST using sensory stimuli instead of depictions of objects, the Visual Sensory Memory Task (VSMT), then concludes this section.
3.1 Mnemonic Similarity Task
Classic standard recognition tests feature only two types of stimuli, old and new stimuli (see, e.g., [26]). These two types of stimuli are typically called “Repeats” (for the old stimuli) and “Foils” (for the new stimuli). Inspired by concepts from computational neuroscience, new concepts of memory subfunctions came up, including pattern separation and completion. Behavioural pattern separation in humans is commonly measured by means of specific match-to-sample tasks that include a third type of stimulus, so-called “Lures” [21, 22]. Lures are stimuli that are similar to but not identical with old stimuli. Accordingly, participants are given three response options, namely, “old”, “new”, and “similar”.
An established test comprising lures is the Mnemonic Similarity Task (MST, [27]; see Fig. 4). This memory test comprises an encoding phase with items depicting everyday objects and a retrieval phase (see Ref. [27] for examples). During encoding, the participant is asked to classify these stimuli as either indoor or outdoor objects. At that point, participants are not aware that they will have to remember the stimuli at a later point in time. During recall, the participant is presented with old, new, and similar stimuli and given the corresponding three response options. The similar lures are assumed to tax hippocampal pattern separation, so that correctly identifying lures as similar suggests successful pattern separation, whereas confusing similar lures with their corresponding targets indicates a bias towards pattern completion. The advantage of the MST is its use of everyday objects, which allows its application in patients with neurological diseases and in aging participants.
3.2 Human Hippocampal Lesion Models
Memory impairment is commonly caused by an impairment of hippocampal functions due to neurological disorders or aging [28, 29]. In the following section, the aims are to show the mechanistic contribution of the human hippocampus to pattern separation and to demonstrate the neurobiological processes within the hippocampus during consolidation of mnemonic information. Consolidation refers to the (time-dependent) stabilization of a memory after initial acquisition (encoding) into a long-lasting form. To study these subjects, we investigated natural hippocampal lesion models in memory impaired patients with selective hippocampal damage.
Although regional neural activity of the hippocampus can be tested using functional MRI (see Ref. [25] for an example), such studies lack information about the causal role of subfield-specific computational processes, i.e., about the causal relationship between hippocampal structure and function. Therefore, we examined two hippocampal lesion models in which specific hippocampal subfields are impaired due to neurological diseases, as shown in Fig. 5.
In Study I [38], transient global amnesia (TGA) served as a model for a selective disruption of hippocampal CA1 neurons. A TGA is characterized by an abrupt cognitive deficit limited to an anterograde amnesia in the acute phase that resolves within 24 h. Typically, focal lesions restricted to area CA1 can be detected in MR imaging [30, 31]. Hence, it was hypothesized that a selective impairment of CA1 during TGA causes a deficit in pattern separation.
Study II [43] aimed at further elucidating the causal role of hippocampal subfield contributions to pattern separation. Here, an extremely rare patient cohort was studied: patients with an inflammatory brain disorder positive for LGI1 antibodies. These patients develop limbic (hippocampal) encephalitis with persisting memory deficits (so-called LGI1 encephalitis [32, 33]) and have structural damage to the hippocampal system [34, 35]. The aim of this study was to investigate pattern separation performance, as the DG and CA3 are predominantly affected by the neuroinflammatory changes of LGI1 encephalitis [36, 37]. The hypothesis was that inflammatory lesions within the DG and CA3 subfields correlate with impaired hippocampal pattern separation.
The role of the hippocampal CA1 networks in pattern separation and recognition memory.
Here, we investigated the critical relay function of CA1 neurons in pattern separation performance using TGA as a natural lesion model of a CA1 deficit [38]. Information processing within CA1 is characterized by the comparison of dual afferent projections, from the EC via the perforant path and from CA3 via the Schaffer collaterals [19, 23, 39]. The integration of those two projections within CA1 is assumed to facilitate immediate retrieval and consolidation in neocortical long-term stores [9]. With regard to the contribution of CA1 neurons to pattern separation processes in humans, our results complement the current concept developed in computational models and experimental animal models [40]. Pattern separation is relayed and facilitated by the DG, which is assumed to decorrelate overlapping memories by sparse coding of neural signals from the EC to CA3 [9]. For the transfer of mnemonic information to extra-hippocampal areas, CA3 projects to area CA1, the main hippocampal output area [39, 41]. The selective CA1 dysfunction caused an impairment in the transmission of the then-separated information from the DG/CA3 network to the neocortex, resulting in ineffective pattern separation performance on the behavioral level [38]. This suggests that CA1 does not perform pattern separation on the neural level per se, but relays memory information from DG/CA3 networks to neocortical areas. This assumption is consistent with our finding that pattern separation performance depends on hippocampal DG volume but shows only a weaker association with the volume of CA1 [38, 43]. The association of CA1 volume with recognition memory can be conceptualized through the central position of CA1 as the functional readout of hippocampal circuit projections. By measuring hippocampal volumes in LGI1 encephalitis, we showed that the volume of CA1 was the best predictor of recognition memory [43].
The integration of the dual afferent projections from EC and CA3 facilitates the restoration of a memory trace and thus recognition of an environmental cue [42]. Together, these results corroborate the view that CA1 is involved in both pattern separation and recognition memory processes. The functional readout of the hippocampal circuit to neocortical areas involved in hippocampus-dependent memory formation is thus highly dependent on the dynamics within the subnetworks.
Dentate Gyrus Networks in Pattern Separation
Theoretical models state that the DG performs pattern separation by the transformation of overlapping input patterns into distinct, non-overlapping representations [17, 20]. Hanert et al. [43] showed evidence that the DG volume was the best predictor of behavioral pattern separation compared to the volume of regions CA2/3 and CA1. This functional model of the DG in pattern separation processes has been confirmed by electrophysiological recordings in rodents [24]. Evidence for a separation-like activity within the DG in humans has been provided by high-resolution fMRI during a mnemonic similarity recognition paradigm [25]. These results complement our findings by presenting a structure–function relationship between pattern separation and the DG. These findings are also in accordance with previous studies that demonstrated a greater volume of the DG to be associated with a better discrimination of overlapping items [44, 45].
In summary, our results in neurological patients clarify the role of the human hippocampus and its specific subfield contributions to pattern separation and memory consolidation. It was found that the hippocampal DG as well as intact CA1 neurons are essential for pattern separation in humans. We also demonstrated that pattern separation was best predicted by the volume of the DG, whereas recognition memory was more strongly associated with the volume of CA1. However, we also found that an impairment due to a lesion restricted to CA1 neurons compromised pattern separation performance. These results emphasize and refine the current view of hippocampus-dependent memory processing, with the hippocampal DG as a critical ‘pattern separator’ and CA1 as essentially involved in transferring the mnemonic output to neocortical long-term stores.
3.3 The Visual Sensory Memory Task (VSMT)
The hippocampus plays an important role when information is transferred from short-term to long-term memory. In principle, that could be any sort of information. Here we will focus on the difference between categorical-semantic and acategorical-sensory, non-semantic information.
During information processing, the original sensory information is possibly enriched with semantic content: the letters of the words of a poem are initially nothing but a pattern of black and white areas. This sensory black-and-white pattern is processed to be interpreted as letters, the letters form words, and the words convey meaning. Likewise, the sensory colored patterns of the stimuli of the MST (see Fig. 4) gain meaning as they are processed, interpreted, and recognized. The four-leaf clover, for instance, could be seen as a sign of good luck by some persons. Semantic content is organized in hierarchical categories: a four-leaf clover is part of the category clover, which is part of the category plants.
In the presence of such semantic content, the original sensory form of the input loses its importance; memorizing this information is based more on the higher representations rather than on the lower ones [46]. The exact amount and content of the meaning associated with such semantic stimuli might, however, vary enormously between persons; a florist might associate quite different things when seeing a picture of a four-leaf clover than a rabbit breeder. This introduces a massive variation of retention performance which is not under the control of the experimenter.
Here we opted for non-semantic sensory information. This type of information shows behavioral characteristics identical to those known for semantic information [47]. It can thus serve as an example of the type of information typically processed by the hippocampus. Devoid of semantic information organized in varying hierarchical categorical structures, it eliminates the variance in retention performance due to the varying amount, type, and linkage of associations. Furthermore, with categorical stimuli, similarity judgements are also subject to a multitude of uncontrollable influences. A zoologist might judge the similarity of two seahorse depictions differently from a layman. A first tentative neuromorphic model of basic hippocampus function should not have to deal with high-level representations acting on the perceived similarity of two stimuli. This was a major reason to consider sensory stimuli.
A second and just as important reason was the parametric control of similarity by mixing sensory stimuli without loss of validity. Sensory stimuli can be mixed to create intermediate stimuli with any desired degree of similarity. This is of specific importance for studies assessing pattern completion and separation, and it cannot be done with categorical stimuli as provided by the MST. One cannot mix the picture of a four-leaf clover with the picture of an oil can to get a picture of something that is somewhere between a four-leaf clover and an oil can and reliably identifiable as such.
Sensory memory is usually reported in two variants. For instance, Cowan [48] describes short and long auditory stores, with short auditory stores keeping information for up to 300 ms, and long auditory stores retaining auditory information for at least several seconds. Please note that the lifetime of the so-called long stores corresponds to the lifetime of classic short-term memory. There is, however, ample evidence that sensory information may be stored for even longer periods, paralleling the classic findings for long-term memory (for a review see Ref. [49]). In other words, there is no reason not to use sensory stimuli in classic memory experiments; as to the new paradigms that came along with the concepts of pattern separation and completion, sensory stimuli offer the advantage of parametric control of similarity.
The Visual Sensory Memory Task (VSMT) assesses pattern separation and completion in a way similar to the MST, using sensory stimuli instead of depictions of objects. In addition, it comes with an analysis different from that presented by the authors of the MST: instead of subtracting certain entries of the 3 × 3 response matrix, it is analyzed in terms of classic Gaussian Signal Detection Theory [50]. Gaussian SDT assumes that a monitored quantity, in our case familiarity, is distributed normally on a decision axis, with equal standard deviations for the different stimulus classes (foils, lures, repeats). Only the means differ, with repeats having the highest familiarity and foils the lowest. Decisions for a specific response, in our case “old”, “similar”, or “new”, are made on the basis of criteria on this decision axis. The differences between the means of the distributions of stimulus classes with a certain familiarity (lures, repeats) and the stimulus class with the lowest familiarity (foils) are measures of the so-called sensitivity (d′). The possibility to quantify the physical correlation of lures with their respective targets makes it possible to establish a psychophysical relationship between physical correlation on the one hand and psychological similarity expressed in d′ on the other hand.
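The sensitivity computation of equal-variance Gaussian SDT can be sketched in a few lines (the rate values used below are invented for illustration, not data from the study):

```python
from statistics import NormalDist

def d_prime(rate_signal, rate_noise):
    """Equal-variance Gaussian SDT sensitivity: d' = z(signal rate) - z(noise rate),
    where z is the inverse of the standard normal CDF. For the VSMT, the 'signal'
    rate is the response rate to lures or repeats and the 'noise' rate is the
    corresponding response rate to foils."""
    z = NormalDist().inv_cdf
    return z(rate_signal) - z(rate_noise)
```

For example, a hit rate of 0.84 against a false-alarm rate of 0.16 yields d′ of roughly 2, while equal rates yield d′ = 0 (no sensitivity).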
Figure 6a shows two examples of the stimuli used in the VSMT, with the second example being a lure to the first one. The stimuli are composed from grayscale visual pink noise, i.e., visual noise that has a 1/f distribution of spatial frequencies. The use of pink noise was inspired by the finding that natural images show a 1/f distribution of spatial frequencies, and that cortical cells are tuned to exactly this distribution of spatial frequencies [51]. This pink noise is then subjected to a Gaussian envelope, smoothening out the grayscale variations of the noise at the borders of the stimuli and preserving them most prominently in the center of the stimuli. This measure should prevent memorizing strategies based on visual artifacts occurring at the borders of the stimuli.
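The stimulus construction can be sketched in one dimension (a simplified illustration with assumed parameters; the actual VSMT stimuli are two-dimensional grayscale images): random-phase sinusoids with 1/f amplitudes yield pink noise, which is then attenuated toward the borders by a Gaussian envelope:

```python
import math
import random

def pink_noise_1d(n, seed=0):
    """1/f ('pink') noise as a sum of random-phase sinusoids whose amplitude
    falls off as 1/f. A 1-D stand-in for the 2-D grayscale noise images."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(1, n // 2 + 1)]
    return [sum((1.0 / f) * math.cos(2.0 * math.pi * f * t / n + phases[f - 1])
                for f in range(1, n // 2 + 1))
            for t in range(n)]

def gaussian_envelope(signal, sigma_frac=0.2):
    """Attenuate the signal toward its borders with a Gaussian window, keeping
    the variation most prominent in the center, as described for the VSMT."""
    n = len(signal)
    c = (n - 1) / 2.0
    s = sigma_frac * n
    return [x * math.exp(-((t - c) ** 2) / (2.0 * s * s))
            for t, x in enumerate(signal)]
```

The envelope leaves the center of the stimulus nearly untouched while smoothing the noise out at the borders, preventing memorization strategies based on border artifacts.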
Analyzing the lure and target sensitivities expressed as d′ values as a function of the physical lure-target correlation, we found a psychophysical law which links the sensitivity to the quartic correlation of lure and target grayscale values: d′ ∝ r_LT⁴, where r_LT is the correlation of lure and target. Figure 6b illustrates this result for two different mixing algorithms, called blending and pixel substitution. Figure 6c shows exemplary behavioral data, with the lure correlations chosen such that they are distributed equally spaced on a quartic correlation axis in order to obtain equally spaced sensitivities.
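The blending idea can be sketched as follows (an illustrative construction under the assumption of zero-mean, unit-variance noise fields, not necessarily the authors' exact algorithm): mixing a target with independent noise as r·target + sqrt(1 − r²)·noise yields a lure whose expected correlation with the target is exactly r:

```python
import math
import random

def make_lure(target, r, seed=0):
    """Blend a zero-mean, unit-variance target with fresh independent noise so
    that the lure's expected correlation with the target equals r (0 <= r <= 1)."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in target]
    return [r * t + math.sqrt(1.0 - r * r) * n for t, n in zip(target, noise)]

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)
```

With many pixels, the empirical lure-target correlation lands close to the requested r; choosing r values equally spaced on a quartic axis, as described above, then yields equally spaced sensitivities under the d′ ∝ r⁴ law.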
The process of pattern separation, i.e., the differentiation between similar stimuli, can be examined quantitatively when using sensory stimuli, based on familiarity judgements. However, in order to test pattern completion one needs an indicator of identification. This can be realized by having the participants name the stimuli: Completing a fragmentary pattern can be verified if the pattern is identified by telling its name.
We therefore tested the ability of participants to learn names for the sensory stimuli and to identify them correctly immediately after learning, and after one week. Figure 7 shows the naming performance directly after learning, and one week later. Naming performance is calculated for “hits”, i.e., trials in which participants correctly recognized the stimuli as targets, as well as for “misses”, i.e., trials in which participants claimed the (old) stimuli to be new but were nevertheless asked to guess a name. Naturally, the naming performance is better for hits than for misses, but even for misses it remains well above chance level. Most importantly, naming performance does not decrease strongly after a week.
The latter data illustrate that sensory stimuli can be used to test the transition of short-term memory to long-term memory, which is the principal role of hippocampus in human memory. Devoid of the influence of higher-level semantic representations and with the additional advantage of the possibility to construct stimuli with any desired degree of similarity, they provide an excellent opportunity to directly compare neuromorphic models of the hippocampus with human behavioral performance.
4 Neuromorphic Investigation Pathways
In the previous section we have shown the special importance of pattern completion and pattern separation for memory formation and their significance for higher cognitive functionalities of the brain. For the emulation of these functionalities within neuromorphic systems, the bridge to cellular learning paradigms is important. In particular, this requires reconstructing cellular learning rules and network architectures in a way that ensures global functionality through local building blocks. One possible approach, making special use of memristive devices, is presented in this section. Starting from Hebbian learning theory, we will show how this theory can be applied to memristive devices and how it can be used to construct network architectures that enable the emulation of hippocampal forms of learning.
4.1 Hebbian Learning
As early as 1949, the psychologist Donald Hebb postulated a temporal relationship between local neuronal activity and the change in synaptic connectivity there. Today, this relationship is known as Hebb's learning rule, and it reads as follows [52]: “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” This is now often stated more simply as: “neurons that fire together wire together.” Thus, the temporal correlation of signals determines the strengthening of synaptic connections. In the original sense, this only covers learning processes, although a similar rule also applies to unlearning. An extension comes from Stent in 1973 and reads: “Neural connections weaken when they are inactive at the same time that the postsynaptic neuron is active, or they will weaken if they are active, but consistently fail to cause the postsynaptic cell to fire an action potential” [53].
While Hebb's learning rule gives an intuitive approach to cellular learning mechanisms, a mathematical approach is needed for a formal quantitative model description. Following Ref. [54], this can be obtained via the characteristics of synaptic plasticity. As mentioned above, a fundamental feature of synaptic plasticity is input specificity. This means that the change in synaptic weight \({\omega }_{ij}\) between pre-neuron \(j\) and post-neuron \(i\) depends only on local variables and thus only on information available at the synapse site. This can be formulated mathematically as follows [54]:
\(\frac{d{\omega }_{ij}}{dt}=F\left({\omega }_{ij},{A}_{j},{A}_{i}\right)\)
Here the pre- and post-synaptic activities \({A}_{j}\) and \({A}_{i}\) are given by the voltage-dependent function \({A}_{i(j)}=g\left({u}_{i\left(j\right)}\right)\), while \(F({\omega }_{ij},{A}_{j},{A}_{i})\) is a function that depends on the learning process, which we will specify in more detail next. An important property of cellular learning is cooperativity. This means that the synaptic weight changes when pre- and post-neuron are active simultaneously. This allows a simple ansatz for the function \(F({\omega }_{ij},{A}_{j},{A}_{i})\):
\(\frac{d{\omega }_{ij}}{dt}=\alpha {A}_{j}{A}_{i},\)
where α is a positive constant (α > 0) called the learning rate. However, it is useful to make α weight-dependent in order to exclude unlimited weight growth:
\(\alpha =\gamma \left(1-{\omega }_{ij}\right)\)
Since \(\gamma\) is a positive constant and \({\omega }_{ij}\) is normalized between zero and one, weight saturation is thereby obtained. The third important property of cellular learning is associativity. It allows local learning rules to be transferred from the cellular level to a multidimensional network level [54]. Furthermore, competition accounts for the limitation of shared synaptic resources, which means that weights can only grow at the expense of other synaptic weights. A simple and commonly used way to implement competition is an adaptive threshold voltage (sometimes referred to as a sliding threshold) for post-neuron activation [54, 55].
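The cooperativity and saturation properties discussed above can be sketched with a simple Euler integration. This is a minimal sketch under the assumption of a soft-bound learning rate of the form \(\gamma (1-{\omega }_{ij})\); the values of \(\gamma\), the time step, and the activities are illustrative.

```python
# Minimal sketch of a Hebbian rule with a weight-dependent learning rate:
# dw/dt = gamma * (1 - w) * A_pre * A_post (soft bound; values illustrative).
def hebbian_step(w: float, a_pre: float, a_post: float,
                 gamma: float = 0.5, dt: float = 0.1) -> float:
    dw = gamma * (1.0 - w) * a_pre * a_post  # cooperativity + saturation
    return min(1.0, w + dw * dt)

w = 0.0
for _ in range(100):                         # repeated coincident activity
    w = hebbian_step(w, a_pre=1.0, a_post=1.0)
print(w)
```

The weight grows only when both activities are non-zero (cooperativity) and approaches, but never exceeds, one (saturation).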
4.2 Memristive Hebbian Learning
Neuromorphic electronics is essentially concerned with the implementation of biological information processing within electrical circuits and systems [1]. Therefore, the emulation of synaptic plasticity via electronic devices is an important aspect of the hardware realization of neural circuits. In this context, memristive devices have shown their potential in recent years and are increasingly used as substitutes for synapses in artificial neural networks [2]. In the following, we will show how they can be described using Hebbian learning theory [56]. First, however, a short formal presentation of the memristive device concept is given.
In their simplest form, memristive devices (also known as memristors) consist of a metal-insulator-metal structure, as sketched in Fig. 8a. The insulator is the memristive layer, which changes its resistance when an electrical voltage is applied. Thus, the device has a memory effect for electrical signals, which also explains its name as a memory resistor. The idea of memristive devices goes back to Leon Chua, who first described this class of electronic devices formally within a mathematical theory [57]. Following Chua, the current-voltage characteristics (I-V curve) of memristive devices can be described by the following set of differential equations:
\(I=G\left(x,V,t\right)\cdot V, \quad \frac{dx}{dt}=f\left(x,V,t\right),\)
where x is called the state variable and ranges between zero and one, while f(x,V,t) is a function describing the state dynamics under external voltage stimuli. The conductance \(G\left(x,V,t\right)\) is called the memductance and can be linked to the state variable via [58]
\(G\left(x,V,t\right)={G}_{off}+x\left({G}_{on}-{G}_{off}\right).\)
Here \({G}_{on}\) and \({G}_{off}\) are the maximum and minimum conductance of the device, respectively. In 2008, scientists at HP Labs made the connection between this theory, developed by Chua in the 1970s, and the resistive switching properties of real materials [59]. In this class of materials, a change in the local atomic configuration results in a change in the resistance of the device [2]. Thus, the function f(x,V,t) describes the voltage-driven atomic reorganization within the memristive layer [59].
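The memristor description above can be sketched as a small state-variable model. The state dynamics f(x,V) below is a placeholder (a simple threshold rule of our own choosing), not a specific device model; the conductance values and rates are illustrative.

```python
# Sketch of the memristor equations: state x in [0, 1] evolves under voltage,
# and the memductance interpolates between G_off and G_on.
def memductance(x: float, g_on: float = 1e-3, g_off: float = 1e-6) -> float:
    return g_off + x * (g_on - g_off)

def step_state(x: float, v: float, dt: float = 1e-3,
               v_th: float = 0.5, rate: float = 100.0) -> float:
    # placeholder f(x, V): the state only moves beyond a voltage threshold
    if v > v_th:
        x += rate * (v - v_th) * dt
    elif v < -v_th:
        x += rate * (v + v_th) * dt
    return min(1.0, max(0.0, x))

x = 0.0
for _ in range(20):              # repeated 1 V write pulses set the device
    x = step_state(x, 1.0)
current = memductance(x) * 1.0   # I = G(x) * V
print(x, current)
```

The threshold in f(x,V) anticipates the threshold behavior of real ionic devices discussed below: small read voltages leave the state untouched.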
The emulation of synaptic plasticity using memristive devices is shown in Fig. 8b. When a high electrical voltage is applied, the atomic configuration within the memristive layer changes. Shown here is the injection of metal atoms into the insulating memristive layer, forming a metallic bridge between the two electrodes of the device. This leads to a permanent change in the resistance of the device and can emulate long-term potentiation (LTP). Furthermore, by applying a sufficiently negative voltage, the metallic bridge can be broken. This increases the resistance of the memristive device again, which can be identified with long-term depression (LTD). A special property of memristive devices is that they also allow gradual resistance changes, which can be used to emulate Hebbian learning. Thus, combining Hebbian learning theory with the memristor equations, the following relationship can be identified:
\(\frac{d{\omega }_{ij}}{dt}=\frac{d{x}_{ij}}{dt}=f\left({x}_{ij},g\left({u}_{ij}\right),t\right).\)
Thus, synaptic plasticity can be emulated via the voltage-driven resistance switching mechanism in memristive devices. Here, \(g\left({u}_{ij}\right)\) represents a voltage-dependent function that describes the voltage across the memristive device. It was shown that for memristive devices this equation can be identified with the logistic equation [60]:
\(\frac{d{x}_{ij}}{dt}=\beta {x}_{ij}\left(1-{x}_{ij}\right).\)
It is important to note that this equation has a state- and voltage-dependent learning rate \(\beta\) and therefore differs from the standard logistic equation. This takes into account the special properties of memristive devices, whose functional mechanisms are usually based on ionic processes and exhibit threshold behavior. Thus, a change of the resistance of the device depends on the duration and amplitude of a voltage stimulus, as well as on the current resistance state [60]. Although the specific form of \(\beta\) depends on the type of memristive device and its physical properties, some important statements can already be made at an abstract (descriptive) level: (i) in the simplest case, the voltage function \(g\left({u}_{ij}\right)\) can be represented by the voltage drop across the memristive device, i.e., with one electrode held at constant potential, by the voltage \({u}_{j}\) at the opposite electrode. (ii) Following reference [60], the learning rate depends linearly on the state variable (\(\beta =\gamma {x}_{ij}{u}_{j}\)), so that the memristive state change for the emulation of synaptic learning processes in the framework of Hebbian learning theory can be summarized as:
\(\frac{d{x}_{ij}}{dt}=\gamma {u}_{j}{x}_{ij}^{2}\left(1-{x}_{ij}\right).\)
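The state- and voltage-dependent logistic update can be integrated numerically with Euler steps. This is a sketch under the assumption \(\beta =\gamma {x}_{ij}{u}_{j}\) stated above; the values of \(\gamma\), the time step, and the applied voltage are illustrative.

```python
# Euler sketch of the logistic memristive update: dx/dt = beta * x * (1 - x)
# with beta = gamma * x * u (gamma, dt and u are illustrative values).
def logistic_step(x: float, u: float, gamma: float = 10.0, dt: float = 0.01) -> float:
    beta = gamma * x * u          # state- and voltage-dependent learning rate
    return x + beta * x * (1.0 - x) * dt

x = 0.2
for _ in range(200):              # constant positive voltage drives a set process
    x = logistic_step(x, u=1.0)
print(x)
```

Because \(\beta\) itself contains the state variable, the update is strongly non-linear in x: growth is slow for a nearly reset device and saturates as x approaches one, in line with the asymmetry discussed for Fig. 9.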
Thus, we have a compact model for a memristive device within the framework of Hebbian learning theory. Figure 9a shows the I-V curve obtained with that model for sweeping the voltage \({u}_{j}\) between 1 V and −1 V (see arrows). Therein, a state-independent learning rate \(\beta \left({u}_{j}\right)\) (red curve) is compared with a state-dependent learning rate \(\beta \left({x}_{ij},{u}_{j}\right)\) (black curve). What can be seen is that by including the state variable in the learning rate, an asymmetry between the set process (at positive voltage) and the reset process (at negative voltage) is obtained. In addition, the state-variable-dependent characteristic leads to a stronger non-linearity. For better visualization, the change of the state variable \(x\) is depicted in Fig. 9b and the function \(f\) is plotted in Fig. 9c. One sees a clearly stronger non-linear behavior for the state-dependent learning rate (black curves) over the time course of the voltage change, which agrees with the threshold behavior of real memristive devices. It is shown in [4] how this model can be adapted to real device characteristics by choosing suitable representations for \(\beta \left({x}_{ij},g\left({u}_{ij}\right),t\right)\). In the following, however, we will discuss the biological plausibility of the model in more detail and show how it can be used to reproduce synaptic plasticity.
4.3 Emulation of Synaptic Learning
To show the biological plausibility of the memristive Hebbian learning model, we will follow reference [6] and first consider the emulation of LTP and LTD. For this purpose, the voltage clamp method was emulated and compared with physiological data obtained from hippocampal CA1 neurons. The results obtained with the memristive learning model are shown in Fig. 10. Here, the post-synaptic potential was used as a constant voltage offset (\({u}_{j}={V}_{post}\)), while a voltage pulse train was applied to the pre-neuron side (cf. Fig. 10a). As sketched in Fig. 10b, this causes the resistance state of the memristive device to be set or reset depending on the post-synaptic offset \({V}_{post}\). Here, \({u}_{set}\) symbolizes the critical voltage for changing the device state. Therewith, the post-synaptic potential determines whether the resistance of the memristive cell is increased (emulation of LTD) or decreased (emulation of LTP), in good agreement with the physiological data of [61], as shown in Fig. 10c.
An important property of synaptic learning is the temporal correlation between neuronal activities on the pre-synaptic side in relation to the post-synaptic side. In this context, spiking neural networks (SNNs) have been shown to mimic information processing in biological neural networks very well. A central component of information coding in SNNs is provided by the neurons of the network, whose activity is coded in spike patterns (voltage pulse trains). The strength of a stimulus is translated into a number of spikes. Here, the all-or-nothing principle applies: if the input stimulus (which can be represented by a current \({I}_{in}\)) depolarizes the post-synaptic potential \({u}_{j}\) to such an extent that it exceeds a critical threshold \({\theta }_{thres}\), a short voltage pulse (spike) is generated. Depending on the stimulus strength, more or fewer spikes are generated. Thus, a high activity of the neuron can be identified with a high number of spikes, i.e., with a higher spike frequency than for a weak input current, which generates only a few spikes. Mathematically, this can be represented quite elegantly within the framework of the leaky integrate-and-fire neuron model, which is generally given by the following equation:
\(\frac{d{u}_{j}}{dt}=F\left({u}_{j}\right)+\delta {I}_{in}.\)
Here \(F\left({u}_{j}\right)\) is a function that describes the voltage integration and must be defined in more detail, and \(\delta\) is a constant. There are several possibilities for the choice of \(F\left({u}_{j}\right)\), of which [54] gives a good overview. In the context considered here, we use the quadratic leaky-integrate-and-fire (QIF) neuron model, specified by the following expression:
\(\frac{d{u}_{j}}{dt}=\frac{{g}_{L}}{C}\left({u}_{j}-{u}_{r}\right)\left({u}_{j}-{u}_{c}\right)+\delta {I}_{in},\)
where \({g}_{L}\) is a constant with the dimension of conductance per voltage, \(C\) is the membrane capacitance, \({u}_{r}\) is the resting potential, and \({u}_{c}\) is the threshold potential for self-induced spiking of the neuron. A schematic representation of the generated voltage curve \({u}_{j}(t)\) of the QIF neuron model is shown in Fig. 11a. Here, a spike signal from neuron A or B (orange or black) is transmitted to the network whenever \({u}_{j}\) exceeds the threshold value \({\theta }_{thres}\). In the model used in reference [6], the neuron model was also extended in such a way that when one of the two neurons in the small network sketched in Fig. 11a spikes, the current membrane potential of the respective other neuron is used as the offset potential \({V}_{post}\). The synaptic weight can then be increased (LTP) or decreased (LTD) depending on the value \({u}_{j}\left(t\right)\) of the other neuron.
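The rate-coding behavior of the QIF neuron can be sketched with a simple Euler integration. All parameter values below are illustrative (not fitted to the chapter's model), and the hard reset after a spike is a common simplification.

```python
# Euler sketch of a quadratic integrate-and-fire neuron:
# du/dt = (g_L * (u - u_r) * (u - u_c) + I_in) / C, with a spike emitted and
# the potential reset when u crosses a threshold. Values are illustrative.
def simulate_qif(i_in: float, t_max: float = 5.0, dt: float = 1e-3,
                 c: float = 1.0, g_l: float = 1.0,
                 u_r: float = 0.0, u_c: float = 1.0,
                 theta: float = 5.0) -> int:
    u, spikes = u_r, 0
    for _ in range(int(t_max / dt)):
        du = (g_l * (u - u_r) * (u - u_c) + i_in) / c
        u += du * dt
        if u >= theta:        # spike: count it and reset the potential
            spikes += 1
            u = u_r
    return spikes

# stronger input current -> more spikes (rate coding)
weak, strong = simulate_qif(2.0), simulate_qif(8.0)
print(weak, strong)
```

The quadratic term makes the potential rise slowly near rest and explosively near threshold, which is what produces the sharp spike onset sketched in Fig. 11a.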
As already mentioned, the temporal order of spikes is relevant for SNNs. An important learning scheme in this context is spike-timing-dependent plasticity (STDP), which is sketched in Fig. 11b. STDP is an asymmetric form of Hebbian learning and introduces causality into undirected neuronal networks. It specifies a strengthening of the synaptic connection (potentiation) if the pre-neuron is active before the post-neuron and a reduction of the synaptic connection strength (depression) if the post-neuron was active before the pre-neuron (cf. Fig. 11b). If neuron A is chosen as the pre-neuron and neuron B as the post-neuron, STDP can be simulated in this network using the memristive Hebbian learning rule, as shown in Fig. 11b. Here, the parameters of the model were adjusted to show good agreement with the experimental data of Bi and Poo [61]. However, the temporal ordering of STDP, which leads to unidirectional connectivity in networks, is not valid over the complete frequency range. For spike patterns at frequencies above 40 Hz, the temporal order of spikes from pre- and post-synaptic neurons is cancelled, as shown in Fig. 11c for the example of cortical pyramidal neurons (red and blue data points) [62]. To reproduce this behavior in the model, a fixed time delay of \(\pm 10\) ms between the spike trains (A and B) was chosen, so that the individual spike pairings are constant with respect to each other. Two cases were considered in reference [6]: (i) the resistance state is constant between two successive voltage pulses, and (ii) the resistance value increases between those pulses as a result of a continuous reset mechanism typically found in ionic memristive devices [4]. As a result, Fig. 11c (red and blue curves) shows that the transition between asymmetric and symmetric behavior can be well emulated within the model. The latter is especially relevant for the formation of bidirectional connections in networks, as we will see in the next section.
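For orientation, the asymmetric STDP window can be sketched in its generic, pair-based exponential form. Note that this is the textbook window, not the chapter's voltage-based memristive model; amplitudes and the time constant are illustrative.

```python
# Generic pair-based STDP window (textbook form, NOT the memristive model):
# pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses.
import math

def stdp_dw(delta_t_ms: float, a_plus: float = 1.0, a_minus: float = 1.0,
            tau_ms: float = 20.0) -> float:
    if delta_t_ms > 0:       # pre spike precedes post spike -> LTP
        return a_plus * math.exp(-delta_t_ms / tau_ms)
    return -a_minus * math.exp(delta_t_ms / tau_ms)  # -> LTD

print(stdp_dw(10.0), stdp_dw(-10.0))
```

In the memristive model, this asymmetry is not imposed by such a window but emerges from the voltage drop across the device, which is why it can also vanish at high pairing frequencies.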
Furthermore, it can be stated that a slow and autonomous reset of the device resistance state leads to an improved match at low frequencies of less than 10 Hz.
4.4 Emulation of Network Dependent Learning Schemes
Learning and memory formation in biological networks is closely linked to network topology [63]. Thus, it is the connectivity between individual neurons that determines how a network responds to external stimuli and thus how it processes the information. While in machine learning the topology is fixed in the form of neuronal layers and the connectivity between the layers is determined by training processes, in biological neural networks a large part of the topology is formed by the learning process itself. In this process, connections that did not exist before are formed between neurons, and thus the network grows with its task.
Emulating growing networks within electronic circuits is difficult to realize in hardware, but all-to-all connected networks, that is, networks in which each neuron is connected to every other neuron, provide a simple opportunity to come a step closer to studying the development of the topology of biological networks. A good example of this are the DG and CA3 regions of the hippocampus, which serve as biological role models for the essential tasks of pattern completion and pattern separation mentioned above. This is considered to be the basis of associative memory in particular, which is characterized by incoming information being stored sequentially and independently of each other. Thus, stored information can be retrieved and restored even if it is only partially presented. In the network topology of the hippocampus, recurrent connections in the DG and CA3 regions could be identified for this purpose, forming an auto-associative network. This can be well modeled by an all-to-all network [6].
The memristive network model of Ref. [6], which emulates those two key functionalities, i.e., sequential learning and pattern completion, in the form of an all-to-all network, is shown in Fig. 12. For this, the memristive learning model was used and external information was presented to the network in two different ways: (i) by applying the individual pixels of visual patterns in parallel and (ii) by applying the individual pixels of a visual pattern sequentially. A schematic representation is shown in Fig. 12a. The obtained connectivity matrices for these two cases are shown for pattern completion and sequential learning in Fig. 12b and c, respectively. For pattern completion, it was shown that with the memristive learning model there is an increase in bidirectional connectivity at higher frequencies, which can be explained by the behavior shown in Fig. 11c. In the case of a sequential pattern presentation during the learning process, a unidirectional connectivity pattern results (see Fig. 12c).
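The effect of the presentation mode on the connectivity matrix can be illustrated with a toy model. This is our own didactic sketch, not the model of Ref. [6]: coincident activity strengthens both directions of a connection, while strictly sequential activity strengthens only the pre-to-post direction.

```python
# Toy illustration of how the presentation mode shapes an all-to-all
# connectivity matrix (didactic sketch, not the memristive network model).
n = 4  # number of neurons ("pixels")

def train(activity):
    """activity: list of sets of neurons active at successive time steps."""
    w = [[0.0] * n for _ in range(n)]
    for t in range(1, len(activity)):
        for pre in activity[t - 1]:          # pre active one step before post
            for post in activity[t]:
                if pre != post:
                    w[pre][post] += 1.0
        for a in activity[t]:                # coincident activity: both ways
            for b in activity[t]:
                if a != b:
                    w[a][b] += 1.0
    return w

parallel   = train([{0, 1, 2, 3}] * 2)       # all pixels presented at once
sequential = train([{0}, {1}, {2}, {3}])     # pixel-by-pixel presentation

# parallel drive -> symmetric (bidirectional) weights,
# sequential drive -> asymmetric (unidirectional) weights
print(parallel[0][1], parallel[1][0])
print(sequential[0][1], sequential[1][0])
```

The parallel case yields a symmetric weight matrix (bidirectional connectivity), whereas the sequential case populates only one direction, mirroring the unidirectional pattern in Fig. 12c.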
This allows the topology of the network to be shaped depending on the presentation of the input data. The latter is particularly important for learning in biological networks, since it allows for experience-dependent learning: depending on how information is presented, i.e., which experiences have been made, the topology of the network and thus the functionality of the neural network change. Thus, memories can shape the performance of the system. This is an important feature related to episodic memory, in which events are linked by their temporal sequence.
Notes
- 1. CA: Cornu Ammonis.
- 2. Schaffer collaterals.
References
Mead, C., Ismail, M. (eds.): Analog VLSI Implementation of Neural Systems, vol. 80. Springer Science & Business Media (1989)
Ielmini, D., Ambrogio, S.: Emerging neuromorphic devices. Nanotechnology 31(9), 092001 (2019)
Adam, G.C., Khiat, A., Prodromakis, T.: Challenges hindering memristive neuromorphic hardware from going mainstream. Nat. Commun. 9, 5267 (2018)
Hansen, M., Zahari, F., Ziegler, M., Kohlstedt, H.: Double-Barrier memristive devices for unsupervised learning and pattern recognition. Front. Neurosci. 11 (2017)
Waser, R., Wuttig, M.: Memristive Phenomena - from fundamental physics to neuromorphic computing: Lecture Notes: Spring School organized by Peter Grünberg Institute, Forschungszentrum Jülich and Physics Institutes, RWTH Aachen University, Jülich Aachen Research Alliance, Section Fundamentals of Future Information Technology (JARA-FIT): in collaboration with universities, research institutes and industry (Forschungszentrum Jülich, Zentralbibliothek) (2016)
Diederich, N., Bartsch, T., Kohlstedt, H., Ziegler, M.: A memristive plasticity model of voltage-based STDP suitable for recurrent bidirectional neural networks in the hippocampus. Sci. Rep. 8, 9367 (2018)
Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science, 3rd edn. Elsevier Science Publishing, Amsterdam (1991)
Andersen, P., Morris, R., Amaral, D., Bliss, T., O’Keefe, J., (eds.): The Hippocampus Book. Oxford University Press (2006)
Rolls, E.T.: The mechanisms for pattern completion and pattern separation in the hippocampus. Front Syst. Neurosci. 7, 74 (2013)
Bliss, T., Schoepfer, R.: Neuroscience: controlling the ups and downs of synaptic strength. Science 304, 973 (2004)
Whitlock, J.R., Heynen, A.J., Shuler, M.G., Bear, M.F.: Learning induces long-term potentiation in the hippocampus. Science 313, 1093 (2006)
Bliss, T., Collingridge, G., Morris, R.: Long-Term Potentiation: Enhancing Neuroscience for 30 Years. Oxford University Press, New York (2004)
Larson, J., Lynch, G.: Role of N-methyl-D-aspartate receptors in the induction of synaptic potentiation by burst stimulation patterned after the hippocampal θ-rhythm. Brain Res. 441, 111 (1988)
Pastalkova, E., Serrano, P., Pinkhasova, D., Wallace, E., Fenton, A.A., Sacktor, T.C.: Storage of spatial information by the maintenance mechanism of LTP. Science 313, 1141 (2006)
Bartsch, T., Schonfeld, R., Muller, F.J., Alfke, K., Leplow, B., Aldenhoff, J., Deuschl, G., Koch, J.M.: Focal lesions of human hippocampal CA1 neurons in transient global amnesia impair place memory. Science 328, 1412–1415 (2010)
Squire, L.R., Zola, S.M.: Structure and function of declarative and nondeclarative memory systems. Proc. Natl. Acad. Sci. U. S. A. 93, 13515–13522 (1996)
McClelland, J.L., McNaughton, B.L., O’Reilly, R.C.: Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 102, 419–457 (1995)
Scoville, W.B., Milner, B.: Loss of recent memory after bilateral hippocampal lesions. J. Neurol. Neurosurg. Psychiatry 20, 11–21 (1957)
O’Reilly, R.C., McClelland, J.L.: Hippocampal conjunctive encoding, storage, and recall: avoiding a trade-off. Hippocampus 4, 661–682 (1994)
Marr, D.: Simple memory: a theory for archicortex. Philos. Trans. R. Soc. B Biol. Sci. 262, 23 (1971)
Yassa, M.A., Stark, C.E.: Pattern separation in the hippocampus. Trends Neurosci 34(10), 515–525 (2011)
Bakker, A., Kirwan, C.B., Miller, M., Stark, C.E.L.: Pattern separation in the human hippocampal CA3 and dentate gyrus. Science 319, 1640 (2008)
Lisman, J.E.: Relating hippocampal circuitry to function: recall of memory sequences by reciprocal dentate-CA3 interactions. Neuron 22, 233 (1999)
Leutgeb, J.K., Leutgeb, S., Moser, M.-B., Moser, E.I.: Pattern separation in the dentate gyrus and CA3 of the Hippocampus. Science 315, 961 (2007)
Berron, D., Schütze, H., Maass, A., Cardenas-Blanco, A., Kuijf, H.J., Kumaran, D., Düzel, E.: Strong evidence for pattern separation in human dentate gyrus. J. Neurosci. 36, 7569–7579 (2016)
Strong, M.H., Strong, E.K.: The nature of recognition memory and of the localization of recognitions. Am. J. Psychol. 27(3), 341–362 (1916)
Stark, S.M., Stevenson, R., Wu, C., Rutledge, S., Stark, C.E.L.: Stability of age-related deficits in the mnemonic similarity task across task variations. Behav. Neurosci. 129(3), 257–68 (2015)
Bartsch, T., Wulff, P.: The hippocampus in aging and disease: from plasticity to vulnerability. Neuroscience 309, 1–16 (2015)
Small, S.A., Schobel, S.A., Buxton, R.B., Witter, M.P., Barnes, C.A.: A pathophysiological framework of hippocampal dysfunction in ageing and disease. Nat Rev Neurosci 12, 585–601 (2011)
Bartsch, T., Alfke, K., Deuschl, G., Jansen, O.: Evolution of hippocampal CA-1 diffusion lesions in transient global amnesia. Ann. Neurol. 62, 475–480 (2007)
Bartsch, T., Alfke, K., Stingele, R., Rohr, A., Freitag-Wolf, S., Jansen, O., Deuschl, G.: Selective affection of hippocampal CA-1 neurons in patients with transient global amnesia without long-term sequelae. Brain 129, 2874–2884 (2006)
Bettcher, B.M., et al.: More than memory impairment in voltage-gated potassium channel complex encephalopathy. Eur. J. Neurol. 21, 1301–1310 (2014)
Butler, C.R., Miller, T.D., Kaur, M., Baker, I.W.S., Boothroyd, G.D., Illman, N.A., Rosenthal, C.R., Vincent, A., Buckley, C.J.: Persistent anterograde amnesia following limbic encephalitis associated with antibodies to the voltage-gated potassium channel complex. J. Neurol. Neurosurg. Psychiatry 85, 387–391 (2014)
Irani, S.R., Michell, A.W., Lang, B., Pettingill, P., Waters, P., Johnson, M.R., Schott, J.M., Armstrong, R.J.E., Zagami, A.S., Bleasel, A.F., Somerville, E.R., Smith, S.M.J., Vincent, A.: Faciobrachial dystonic seizures precede Lgi1 antibody limbic encephalitis. Ann. Neurol. 69 (2011)
Malter, M.P., Frisch, C., Schoene-Bake, J.-C., Helmstaedter, C., Wandinger, K.-P., Stoecker, W., Urbach, H., Surges, R., Elger, C.E., Vincent, A., Bien, C.G.: Outcome of limbic encephalitis with VGKC-complex antibodies. Relation to antigenic specificity. J. Neurol. 261, 1695–1705 (2014)
Finke, C., Prüss, H., Heine, J., Reuter, S., Kopp, U.A., Wegner, F., Then Bergh, F., Koch, S., Jansen, O., Münte, T., Deuschl, G., Ruprecht, K., Stöcker, W., Wandinger, K.-P., Paul, F., Bartsch, T.: Evaluation of cognitive deficits and structural hippocampal damage in encephalitis with leucine-rich, glioma-inactivated 1 antibodies. JAMA Neurol. 74, 50–59 (2017)
Miller, T.D., Chong, T.T., Aimola Davies, A.M., Ng, T.W.C., Johnson, M.R., Irani, S.R., Vincent, A., Husain, M., Jacob, S., Maddison, P., Kennard, C., Gowland, P.A., Rosenthal, C.R.: Focal CA3 hippocampal subfield atrophy following LGI1 VGKC-complex antibody limbic encephalitis. Brain 140, 1212–1219 (2017)
Hanert, A., Pedersen, A., Bartsch, T.: Transient hippocampal CA1 lesions in humans impair pattern separation performance. Hippocampus (2019)
Knierim, J.J., Neunuebel, J.P.: Tracking the flow of hippocampal computation: pattern separation, pattern completion, and attractor dynamics. Neurobiol. Learn. Mem. 129, 38–49 (2016)
O’Reilly, R.C., Rudy, J.W.: Computational principles of learning in the neocortex and hippocampus. Hippocampus 10(4), 389–397 (2000)
Insausti, R., Amaral, D.G.: Hippocampal formation. In: Paxinos, G., (ed.), The Human Nervous System, pp. 871–914. Elsevier (2004)
Hasselmo, M.E., Eichenbaum, H.: Hippocampal mechanisms for the context-dependent retrieval of episodes. Neural Netw. 18, 1172–1190 (2005)
Hanert, A., Rave, J., Granert, O., Ziegler, M., Pedersen, A., Born, J., Finke, C., Bartsch, T.: Hippocampal dentate gyrus atrophy predicts pattern separation impairment in patients with LGI1 encephalitis. Neuroscience 400, 120–131 (2019)
Dillon, S., Tsivos, D., Knight, M.J., McCann, B., Pennington, C.M., Shiel, A.I., Conway, M.E., Newson, M.A., Kauppinen, R.A., Coulthard, E.J.: The impact of ageing reveals distinct roles for human dentate gyrus and CA3 in pattern separation and object recognition memory. Sci. Rep. 7 (2017)
Doxey, C.R., Kirwan, C.B.: Structural and functional correlates of behavioral pattern separation in the hippocampus and medial temporal lobe. Hippocampus 25, 524–533 (2015)
Craik, F.I.M., Tulving, E.: Depths of processing and the retention of words in episodic memory. J. Exp. Psychol. Gen. 104, 268–294 (1975)
Kaernbach, C.: The memory of noise. Exp. Psychol. 51(4), 240–248 (2004)
Cowan, N.: On short and long auditory stores. Psychol. Bull. 96, 341–370 (1984)
Winkler, I., Cowan, N.: From sensory to long-term memory: evidence from auditory memory reactivation studies. Exper. Psychol. 52(1), 3–20 (2005)
Green, D.M., Swets, J.A.: Signal Detection Theory and Psychophysics. John Wiley, New York (1966)
Field, D.J.: Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A 4(12), 2379–2394 (1987)
Hebb, D.O.: The Organization of Behavior: A Neuropsychological Theory, 11th printing. Wiley, New York (1974)
Stent, G.S.: A physiological mechanism for Hebb’s postulate of learning. Proc. Natl. Acad. Sci. 70(4), 997 (1973). https://doi.org/10.1073/pnas.70.4.997
Gerstner, W., Kistler, W.M., Naud, R., Paninski, L.: Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press, Cambridge (2014)
Bienenstock, E.L., Cooper, L.N., Munro, P.W.: Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2(1), 32 (1982). https://doi.org/10.1523/JNEUROSCI.02-01-00032.1982
Ziegler, M., Wenger, C., Chicca, E., Kohlstedt, H.: Tutorial: concepts for closely mimicking biological learning with memristive devices: principles to emulate cellular forms of learning. J. Appl. Phys. 124(15), 152003 (2018). https://doi.org/10.1063/1.5042040
Chua, L.: Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507–519 (1971). https://doi.org/10.1109/TCT.1971.1083337
Yang, J.J., Strukov, D.B., Stewart, D.R.: Memristive devices for computing. Nat. Nanotechnol. 8, 13–24 (2013)
Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453(7191), 80–83 (2008)
Ziegler, M.C., Riggert, M., Hansen, T.B., Kohlstedt, H.: Memristive Hebbian plasticity model: device requirements for the emulation of Hebbian plasticity based on memristive devices. IEEE Trans. Biomed. Circuits Syst. 9(2), 197–206 (2015). https://doi.org/10.1109/TBCAS.2015.2410811
Bi, G., Poo, M.: Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998)
Sjöström, P.J., Turrigiano, G.G., Nelson, S.B.: Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 32, 1149–1164 (2001). https://doi.org/10.1016/S0896-6273(01)00542-6
Fornito, A., Zalesky, A., Bullmore, E.T.: Fundamentals of Brain Network Analysis. Academic, Amsterdam (2016)
Buzsáki, G.: Rhythms of the Brain. Oxford University Press (2006)
Ziegler, M., Kohlstedt, H.: Memristive models for the emulation of biological learning. In: Memristor Computing Systems, pp. 247–272. Springer International Publishing, Cham (2022)
Acknowledgements
The authors acknowledge financial support via the Deutsche Forschungsgemeinschaft (DFG) by the Research Unit 2093: memristive devices for neuronal systems.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Kaernbach, C., Bartsch, T., Brütt, M., Hanert, A., Diederich, N., Ziegler, M. (2024). Emulation of Learning Behavior in the Hippocampus: From Memristive Learning to Behavioral Tests. In: Ziegler, M., Mussenbrock, T., Kohlstedt, H. (eds) Bio-Inspired Information Pathways. Springer Series on Bio- and Neurosystems, vol 16. Springer, Cham. https://doi.org/10.1007/978-3-031-36705-2_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-36704-5
Online ISBN: 978-3-031-36705-2