Article

Beyond Memristors: Neuromorphic Computing Using Meminductors

by
Frank Zhigang Wang
Division of Computing, Engineering & Mathematics, University of Kent, Canterbury CT2 7NZ, UK
Micromachines 2023, 14(2), 486; https://doi.org/10.3390/mi14020486
Submission received: 22 January 2023 / Revised: 6 February 2023 / Accepted: 16 February 2023 / Published: 19 February 2023

Abstract
Resistors with memory (memristors), inductors with memory (meminductors) and capacitors with memory (memcapacitors) play different roles in novel computing architectures. We found that a coil with a magnetic core is an inductor with memory (meminductor) in terms of its inductance L(q) being a function of the charge q. The history of the current passing through the coil is remembered by the magnetization inside the magnetic core. Such a meminductor can play a unique role (that cannot be played by a memristor) in neuromorphic computing, deep learning and brain-inspired computers since the time constant ($t_0 = \sqrt{LC}$) of a neuromorphic RLC circuit is jointly determined by the inductance $L$ and the capacitance $C$, rather than the resistance $R$. As an experimental verification, this newly invented meminductor was used to reproduce the observed biological behavior of amoebae (the memorizing, timing and anticipating mechanisms). In conclusion, a beyond-memristor computing paradigm is theoretically sensible and experimentally practical.

1. Introduction

A memristor is an ideal candidate for non-Turing machines due to its compact processing-in-memory architecture. As a sibling of the memristor (resistor with memory), the inductor with memory (meminductor) has a unique role to play in neuromorphic computing systems, novel computing architectures and dynamical neural networks.
An inductor, typically consisting of an insulated wire wound into a coil, stores energy in the magnetic flux φ surrounding it when a current i flows through it. When the current changes, the time-varying magnetic flux induces a voltage across the coil, described by Faraday’s law [1]. Such an inductor is characterized by its inductance $L = \varphi / i$. In SI units, the unit of inductance is the henry (H). As shown in Figure 1, by adding a magnetic core made of a ferromagnetic material, such as iron, inside the coil, the magnetizing flux from the coil induces magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it [1].
Organisms such as amoebae exhibit primitive learning through memorizing, timing and anticipating mechanisms. Their adaptive behavior was emulated by a memristor-based RLC circuit [2]. Motivated by this work, we design a meminductor-based neuromorphic architecture that self-adjusts its inherent resonant frequency to follow the external stimulus frequency in a natural way. In contrast to the previous work, our innovation is that this architecture uses a unique meminductor to increment its time constant and subsequently decrement its resonant frequency to match the stimulus frequency. We intend to use this architecture to help better investigate the cellular origins of primitive intelligence. Such research is significant not only for understanding primitive learning but also for developing a novel computing architecture.
In this article, we first prove that a coil structure with a magnetic core is, in fact, a meminductor, since its inductance is no longer a constant, and then experimentally verify this new device in neuromorphic computing.

2. LLG Model for the Coil Core Structure

Next, we produce a theory to physically describe the current–flux interaction in a conducting coil with a magnetic core. For the sake of convenience, the magnetic core is assumed to be a single-domain cylinder with uniaxial anisotropy in the approximate sense: the magnetization is uniform and rotates in unison [3]. In an ideal case, there is a negligible amount of eddy current damping and parasitic “capacitor” effect.
It was found that the rotational process dominates the fast reversal of square-loop ferrites, with a switching coefficient $S_W = 0.2~\mathrm{Oe \cdot \mu s}$ [4]. The rotational model for the coil core structure is shown in Figure 2.
The Landau–Lifshitz–Gilbert equation [5,6] is
$$(1+g^2)\,\frac{d\mathbf{M}_S(t)}{dt} = -|\gamma|\,\mathbf{M}_S(t)\times\mathbf{H} \;-\; \frac{g|\gamma|}{M_S}\,\mathbf{M}_S(t)\times\left(\mathbf{M}_S(t)\times\mathbf{H}\right)$$
where g is the damping factor and γ is the gyromagnetic ratio.
The first term on the right-hand side can be rewritten as $-|\gamma|\,\mathbf{M}_S(t)\times\mathbf{H} = -|\gamma|\,(M_S\sin\theta\sin\psi\,H\,\mathbf{i} - M_S\sin\theta\cos\psi\,H\,\mathbf{j})$. This term has no k component (along Z) and therefore does not contribute to $M_Z$.
The second term can be rewritten as
$$-\frac{g|\gamma|}{M_S}\,\mathbf{M}_S(t)\times\left(\mathbf{M}_S(t)\times\mathbf{H}\right)$$
$$= -\frac{g|\gamma|}{M_S}\,\left(M_S\sin\theta\cos\psi\,\mathbf{i} + M_S\sin\theta\sin\psi\,\mathbf{j} + M_S\cos\theta\,\mathbf{k}\right)\times\left(M_S\sin\theta\sin\psi\,H\,\mathbf{i} - M_S\sin\theta\cos\psi\,H\,\mathbf{j}\right)$$
$$= -\frac{g|\gamma|}{M_S}\,\left[-M_S\sin\theta\cos\psi \cdot M_S\sin\theta\cos\psi\,H - M_S\sin\theta\sin\psi \cdot M_S\sin\theta\sin\psi\,H\right]\mathbf{k} \quad \text{(keeping only the k component)}$$
$$= g|\gamma| M_S H\left[\sin^2\theta\cos^2\psi + \sin^2\theta\sin^2\psi\right]\mathbf{k} = g|\gamma| M_S H \sin^2\theta\,\mathbf{k}$$
$$= g|\gamma| M_S H\left(1-\cos^2\theta\right)\mathbf{k} = g|\gamma| M_S H\left[1-\left(\frac{M_Z}{M_S}\right)^2\right]\mathbf{k}$$
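The derivation above hinges on the k component of the double cross product, which can be spot-checked numerically. The following is a minimal sketch (the magnitudes $M_S$ and $H$ are arbitrary choices for the check, not values from this work) verifying that $[\mathbf{M}\times(\mathbf{M}\times\mathbf{H})]_z = -M_S^2 H \sin^2\theta$ when the field is applied along Z:

```python
import numpy as np

# Numeric spot check of the k-component identity used above: with H along Z,
# [M x (M x H)]_z = -Ms^2 * H * sin(theta)^2.  Magnitudes are arbitrary.
Ms, H = 1.3, 0.7
rng = np.random.default_rng(0)
max_dev = 0.0
for theta, psi in rng.uniform(0.0, np.pi, size=(5, 2)):
    M = Ms * np.array([np.sin(theta) * np.cos(psi),
                       np.sin(theta) * np.sin(psi),
                       np.cos(theta)])
    Hvec = np.array([0.0, 0.0, H])           # field applied along Z
    lhs = np.cross(M, np.cross(M, Hvec))[2]  # k component of M x (M x H)
    rhs = -Ms**2 * H * np.sin(theta)**2
    max_dev = max(max_dev, abs(lhs - rhs))
print(f"max deviation over random angles: {max_dev:.2e}")
```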
From the above, we can obtain the following equation:
$$(1+g^2)\,\frac{dM_Z(t)}{dt} = g|\gamma| M_S H\left[1-\left(\frac{M_Z}{M_S}\right)^2\right]$$
Assuming $m(t) = M_Z(t)/M_S$, we can obtain
$$\frac{dm(t)}{dt} = \frac{g|\gamma| H}{1+g^2}\left[1-m^2(t)\right] = \frac{1}{S_W}\,i(t)\left[1-m^2(t)\right]$$
The threshold for magnetization switching is automatically taken into account because the switching coefficient is defined based on the threshold field H0, which is one to two times the coercive force HC [3,7,8].
The hyperbolic tangent satisfies $\frac{d}{dx}\tanh x = 1-\tanh^2 x$, and the chain rule gives $\frac{du}{dx} = \frac{du}{dy}\frac{dy}{dx}$; therefore, it is reasonable to assume that
$$m(t) = \tanh\left[\frac{q(t)}{S_W} + C\right],$$
where $\frac{d}{dt}q(t) = i(t)$ and C is a constant of integration, such that $C = \tanh^{-1} m_0$ if q(t = 0) = 0 (assuming the charge does not accumulate at any point) and m₀ is the initial value of m.
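This closed form can be checked against a direct numerical integration of Equation (3). The sketch below assumes an arbitrary sinusoidal drive current and illustrative values for $S_W$ and $m_0$; it is a verification sketch, not the paper's simulation setup:

```python
import numpy as np

# Sketch: check the closed form m(t) = tanh(q(t)/S_W + atanh(m0)) against a
# forward-Euler integration of dm/dt = (1/S_W) * i(t) * (1 - m^2).
# S_W, m0 and the drive current are illustrative assumptions.
S_W = 0.2
m0 = -0.964
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
i_t = 0.5 * np.sin(2 * np.pi * 2 * t)        # arbitrary drive current

m, q = m0, 0.0
max_err = 0.0
for k in range(len(t)):
    m += (i_t[k] / S_W) * (1 - m * m) * dt   # Euler step of Equation (3)
    q += i_t[k] * dt                         # q(t) = integral of i dt
    closed = np.tanh(q / S_W + np.arctanh(m0))
    max_err = max(max_err, abs(m - closed))

print(f"max |Euler - closed form| = {max_err:.2e}")
```

The two curves agree to within the Euler step error, supporting the tanh ansatz.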
$dM_Z/dt$ can be observed via the induced voltage v(t):
$$\mu_0 S\,\frac{dM_Z}{dt} = S\,\frac{dB_Z}{dt} = \frac{d\varphi_Z}{dt} = v(t)$$
where μ0 is the permeability and S is the cross-sectional area.
Integrating Equation (4) results in
$$\varphi = \mu_0 S M + C' = \mu_0 S M_S m + C'$$
where C′ is another constant of integration.
Combining Equation (3) and Equation (5) and assuming $\varphi(t=0) = 0$, we have $C' = -\mu_0 S M_S m_0$, so
$$\varphi = \mu_0 S M_S\left[\tanh\left(\frac{q}{S_W} + \tanh^{-1} m_0\right) - m_0\right].$$
Beyond the first-order setting, a second-order circuit element, such as a meminductor, requires double time integrals of voltage and current, namely, $\sigma = \int q\,dt = \iint i\,dt^2$ and $\rho = \int \varphi\,dt = \iint v\,dt^2$. With the use of these additional variables [8,9], we accommodate a meminductor, a memcapacitor and other second-order circuit elements with memory. By integrating Equation (6), we have
$$\rho = \int_{-\infty}^{t} \varphi\,d\tau = \mu_0 S M_S \int_{-\infty}^{t} \left[\tanh\left(\frac{q}{S_W} + \tanh^{-1} m_0\right) - m_0\right] d\tau.$$
Since $\int \tanh x\,dx = \ln(\cosh x) + C$, we have
$$\rho = \mu_0 S M_S \ln\left\{\cosh\left[\tanh\left(\frac{q}{S_W} + \tanh^{-1} m_0\right) - m_0\right]\right\} + C'' \triangleq \hat{\rho}(q).$$
Therefore, we have
$$L = \frac{\varphi}{i} = \frac{\mu_0 S M_S\left[\tanh\left(\frac{q}{S_W} + \tanh^{-1} m_0\right) - m_0\right]}{\dfrac{dq}{dt}} \triangleq L(q)$$
where the denominator is still a function of the charge $q = \hat{q}(t)$ since $\frac{dq}{dt} = i(t) = i[\hat{q}^{-1}(q)]$.
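Equation (9) can be evaluated numerically to make the memory effect concrete: the same instantaneous current yields different inductances for different accumulated charges. The parameter values below (core area, saturation magnetization, and the sample charges) are assumed purely for illustration:

```python
import numpy as np

# Hypothetical parameter values; only the functional form follows Equation (9).
mu0 = 4e-7 * np.pi   # permeability of free space (H/m)
S = 1e-6             # core cross-sectional area (m^2), assumed
Ms = 8e5             # saturation magnetization (A/m), assumed
S_W = 0.2            # switching coefficient, assumed
m0 = -0.964          # initial normalized magnetization

def meminductance(q, i):
    """L(q) = mu0*S*Ms*[tanh(q/S_W + atanh(m0)) - m0] / i, per Equation (9)."""
    return mu0 * S * Ms * (np.tanh(q / S_W + np.arctanh(m0)) - m0) / i

# Memory: the same instantaneous current i gives a different inductance
# depending on the charge q accumulated so far.
L_a = meminductance(q=0.05, i=0.5)
L_b = meminductance(q=0.40, i=0.5)
print(L_a, L_b)
```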
Based on Equation (8), a typical ρ–q curve is depicted in Figure 3 with m₀ = −0.964 (this value reflects the intrinsic fluctuation; otherwise, M reverts to the stable equilibria $m_0 = \pm 1$).

3. Experimental Verification of the Rotational Model

To verify the validity and accuracy of the above rotational model, Equation (3) with $H(t) \propto i(t)$ is used to reproduce various m–H loops in Figure 4.
As a comparison, a typical m–H loop of real-world magnetic materials is displayed in Figure 5. The above simulations clearly validate Cushman’s conclusion that “the rotational model is applicable to the driving current of an arbitrary waveform” [3].
As another comparison, a simulated loop based on $m = \tanh(A(H \pm H_C))$ is displayed in Figure 6. The equivalence of the formula $m = \tanh(A(H \pm H_C))$ and the formula $m(t) = \tanh\left[\frac{1}{S_W}\left(q(t) \pm S_W \tanh^{-1}|m_0|\right)\right]$ indicates that the rotational model is good enough to reproduce a sine-wave response.
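As a sketch of how the rotational model produces hysteresis, a loop can be traced by integrating Equation (3) under a sinusoidal drive (all parameter values below are assumed for illustration). At the two H = 0 crossings within one cycle, m takes very different values, which is the hallmark of an m–H loop: m depends on the accumulated charge q, not on the instantaneous field.

```python
import numpy as np

# Sketch: trace an m-H loop by integrating Equation (3) with a sinusoidal
# drive H(t) ∝ i(t).  All parameter values are assumed for illustration.
S_W = 0.2
m0 = -0.99
dt = 1e-4
t = np.arange(0.0, 1.0, dt)            # one drive period
i_t = 2.0 * np.sin(2 * np.pi * t)      # H(t) ∝ i(t)

m = np.empty_like(t)
m[0] = m0
for k in range(1, len(t)):
    m[k] = m[k-1] + (i_t[k-1] / S_W) * (1 - m[k-1]**2) * dt

# Hysteresis: at the two H = 0 crossings of the cycle, m differs strongly,
# because m depends on the accumulated charge q, not the instantaneous H.
print(f"m at H = 0 (start of cycle): {m[0]:.3f}")
print(f"m at H = 0 (half cycle):     {m[len(t)//2]:.3f}")
```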

4. Simulations and Experiments of a Coil Core Meminductor for Neuromorphic Computing

Nature exhibits unconventional ways of processing information. Taking amoebae as an example, they display memorizing, timing and anticipating mechanisms, which may represent the origins of primitive learning. A circuit element with memory can be used to mimic these behaviors in terms of being plastic according to the dynamic history [13,14,15].
As shown in Figure 7, a simple RLC neuromorphic circuit using a coil core meminductor, L(q), is designed. The temperature controlling the motion of an amoeba is analogous to the input voltage, Vin, whereas the output voltage, Vout, is analogous to the locomotive speed of the amoeba.
With the progress of time, the circuit’s resonance frequency automatically scans the following frequency range:
$$f_0 = \frac{1}{2\pi\sqrt{L(q)\,C}} = \frac{1}{2\pi\sqrt{L\left(\int i(t)\,dt\right) C}}$$
When the ramping circuit resonance frequency, f0, hits the (temperature) stimulus frequency, fsti, at a time point, a resonance is triggered.
This neuromorphic circuit in Figure 7 using a coil core meminductor reasonably reproduces a behavior that was observed on amoebae: in response to the input stimulus pulses (representing the temperature drops), the circuit reduces the amplitude of its output (representing the amoeba’s speed) at the corresponding time points. As demonstrated in Figure 8, long-lasting responses for spontaneous in-phase slow down (SPS) [13,14] are both simulated and tested experimentally: the amoeba being exposed to the three temperature drops slows down or even stops at the corresponding time points S1, S2 and S3. Remarkably, the amoeba is found to slow down even if the temperature drops do not occur at C1, C2 and C3 (that are naturally anticipated by the amoeba after the three consecutive drops are experienced at S1, S2 and S3).
The experimental setup of the neuromorphic circuit in Figure 8 is as follows: $L[q(t)] = L\left[\int i(t)\,dt\right]$ starts at 2 H and then decreases by 20% after each stimulus pulse. The circuit’s resonance frequency, determined by the staircased L(q) (Figure 3), increases as stimulus pulses accumulate. The simulation in Figure 8a agrees with our experiment in Figure 8b on a hardware emulator built with a dsPIC30F2011 microcontroller, an MCP4261 digital potentiometer and a differential 12-bit ADC [15].
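The frequency scan in this setup is easy to reproduce: starting from L = 2 H and shrinking L by 20% per pulse raises $f_0 = 1/(2\pi\sqrt{LC})$ step by step until it reaches the stimulus frequency. The capacitance and stimulus frequency below are assumed values for illustration, not those of the hardware emulator:

```python
import math

# Sketch of the frequency scan: L(q) starts at 2 H and is reduced by 20% after
# each stimulus pulse, so f0 = 1/(2*pi*sqrt(L*C)) climbs step by step until it
# reaches the stimulus frequency.  C and f_sti are assumed for illustration.
C = 1e-3      # farads (assumed)
L = 2.0       # henries, initial meminductance
f_sti = 8.0   # stimulus frequency in Hz (assumed)

pulse = 0
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
while f0 < f_sti:
    pulse += 1
    L *= 0.8   # 20% drop per stimulus pulse
    f0 = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"after pulse {pulse}: L = {L:.3f} H, f0 = {f0:.2f} Hz")
# resonance is triggered once f0 reaches f_sti
```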
This experiment vividly demonstrates amoebae’s three mechanisms: 1. the memorizing mechanism (the amoeba remembers the three temperature drops at S1, S2 and S3); 2. the timing mechanism (the amoeba slows down at the correct time points C1, C2 and C3 despite no temperature drops at these time points); and 3. the anticipating mechanism (the amoeba slows down actively because it anticipates possible future drops at C1, C2 and C3 based on its memory of S1, S2 and S3, although these temperature drops do not occur). Remarkably, these memorizing/timing/anticipating mechanisms are implemented by our newly invented coil core meminductor, which uses the magnetization to remember the current history, automatically adapts the time constant determined by L(q) to the stimulus and triggers the resonance, respectively.
This neuromorphic circuit is a deep learning neural network [16] with multiple layers between the input and output layers, as shown in Figure 9. The meminductor L(q) and capacitor C store energy in the form of magnetic flux and electric field, respectively, whereas resistor R only consumes energy. Energy can be transferred from one form to the other, which is oscillatory with a resonance frequency $f_0 = \frac{1}{2\pi\sqrt{L(q)C}}$. The resistance R dampens the oscillation, diminishing it with time. Loosely speaking, such a damped oscillation may be approximated by $e^{-\alpha t}\sin 2\pi f_0 t$, where $\alpha = \frac{R}{2L(q)}$ is the damping factor.
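The damped-oscillation approximation can be sketched directly; the component values R, L and C below are assumed for illustration only:

```python
import math

# Sketch of the damped output e^(-alpha*t) * sin(2*pi*f0*t) with
# alpha = R / (2*L(q)); R, L and C are assumed component values.
R = 10.0   # ohms
L = 2.0    # henries (current value of L(q))
C = 1e-3   # farads

alpha = R / (2 * L)                          # damping factor
f0 = 1 / (2 * math.pi * math.sqrt(L * C))    # resonance frequency

def v_out(t):
    """Damped oscillation approximating the circuit output."""
    return math.exp(-alpha * t) * math.sin(2 * math.pi * f0 * t)

# The exponential envelope shrinks the oscillation with time:
print(abs(v_out(0.1)), abs(v_out(1.1)))
```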

5. Discussion and Conclusions

Memristors (resistors with memory), meminductors (inductors with memory) and memcapacitors (capacitors with memory) have different roles in neuromorphic computing systems, novel computing architectures and dynamical neural networks. In this study, we found that a coil with a magnetic core is, in fact, an inductor with memory (meminductor) in terms of its inductance being a function of the charge. This meminductor can play a unique role (that cannot be played by a memristor) in neuromorphic computing [17,18], deep learning [16] and brain-inspired computing [19,20,21] since the time constant ($t_0 = \sqrt{LC}$) of a neuromorphic RLC circuit is jointly determined by the inductance $L$ and capacitance $C$, rather than the resistance $R$. As an experimental verification, this new meminductor was used to reasonably reproduce the observed biological behavior of amoebae, in which the resonance frequency tracks the stimulus frequency. In conclusion, a beyond-memristor computing paradigm is theoretically sensible and experimentally practical.
Nature exhibits unconventional ways of storing and processing information, and circuit elements with memory mimic the dynamical behaviors of some biological systems in terms of being plastic according to the history of the systems. As a practical application, the Pavlovian experiment on conditioned reflex was reproduced by a memristor neural network with the aid of the so-called “delayed switching” effect [22,23]. In this application, the total length of the stimuli sequence, the frequency of the stimuli sequence and the spike width are carefully adjusted such that the time delay point of the memristor synapse is not exceeded while only one neuron fires. In many applications, it is not feasible to solve the problems with conventional computational models and methods (i.e., the Turing machine [24,25,26,27] and the von Neumann architecture [28,29,30,31]). As demonstrated above, neuromorphic architectures may help.
Understanding the brain with non-linear dynamics and extreme complexity is still a great challenge since the human brain has 10¹¹ neurons and 10¹⁴ synapses (each neuron is connected to up to 20,000 synapses) [32,33,34,35,36]. By coincidence, as one of the simplest creatures or organisms existing on earth, unicellular amoebae display some mysterious brain-like behaviors in terms of controlling their actions [37,38,39,40,41]. Their memorizing, timing and anticipating mechanisms may represent the origins of primitive learning.
The evolution of life includes the process of evolving intelligence in charge of controlling and predicting behavior. In 1952, Hodgkin and Huxley developed an equivalent circuit to explain the initiation and propagation of action potentials and the underlying ionic mechanisms in the squid giant axon [17,42,43,44,45,46]. They were awarded the Nobel Prize in Physiology or Medicine for this work in 1963. In the so-called Hodgkin–Huxley model, an electrical circuit representing each cell consists of a linear resistor, a capacitor, three batteries and two unconventional elements identified by Hodgkin and Huxley as time-varying resistors. In 2012, these potassium and sodium time-varying resistors were replaced by a potassium ion-channel memristor and a sodium ion-channel memristor, respectively [18,19]. This represents great progress in neural physiology and brain science over the past 70 years in terms of exploring the origins of primitive learning from an evolutionary perspective.
In this work, we developed a meminductor-based neuromorphic architecture that self-adjusts its inherent resonant frequency in a natural way following the external stimuli frequency. In contrast to the previous work, our innovation is that this architecture uses a unique meminductor to increment its time constant and subsequently decrement its resonant frequency to match the stimuli frequency. This architecture may help better investigate the cellular origins of primitive intelligence [47,48,49]. This sort of research is significant in terms of not only understanding the primitive learning but also developing a novel computing architecture, which will be much more integrated with our physical and social environment, capable of self-learning, as well as processing and distributing big data at an unprecedented scale [50,51]. This will require new designs, new theories, new paradigms and close interactions with application experts in the sense that new bio-inspired (neurosynaptic) and non-Turing-inspired computing platforms are moving away from traditional computer architecture design [51].

Author Contributions

Conceptualization, F.Z.W.; methodology, F.Z.W.; software, F.Z.W.; validation, F.Z.W.; formal analysis, F.Z.W.; investigation, F.Z.W.; resources, F.Z.W.; data curation, F.Z.W.; writing—original draft preparation, F.Z.W.; writing—review and editing, F.Z.W.; visualization, F.Z.W.; project administration, F.Z.W.; funding acquisition, F.Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by an EC grant “Re-discover a periodic table of elementary circuit elements”, PIIFGA2012332059, Marie Curie Fellow: Leon Chua (UC Berkeley), Scientist-in-charge: Frank Wang (University of Kent).

Data Availability Statement

Most of the data generated and analysed during this study are included in this published article. The additional data are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alexander, C.; Sadiku, M. Fundamentals of Electric Circuits, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994; p. 211. [Google Scholar]
  2. Pershin, Y.V.; La Fontaine, S.; Di Ventra, M. Memristive model of amoeba learning. Phys. Rev. E 2009, 80, 021926. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Donahue, M.J.; Porter, D.G. Analysis of switching in uniformly magnetized bodies. IEEE Trans. Magn. 2002, 38, 2468–2470. [Google Scholar] [CrossRef]
  4. Gyorgy, E.M. Rotational model of flux reversal in square loop ferrites. J. Appl. Phys. 1957, 28, 1011–1015. [Google Scholar] [CrossRef]
  5. Landau, L.D.; Lifshitz, E.M. Theory of the dispersion of magnetic permeability in ferromagnetic bodies. Phys. Z. Sowjetunion. 1935, 8, 153. [Google Scholar]
  6. Gilbert, T.L. A Lagrangian formulation of the gyromagnetic equation of the magnetic field. Phys. Rev. 1955, 100, 1243. [Google Scholar]
  7. Cushman, N. Characterization of Magnetic Switch Cores. IRE Trans. Compon. Parts 1961, 8, 45–50. [Google Scholar] [CrossRef]
  8. Menyuk, N.; Goodenough, J. Magnetic materials for digital computer components I. J. Appl. Phys. 1955, 26, 8–18. [Google Scholar] [CrossRef]
  9. Riaza, R. Second order mem-circuits. Int. J. Circuit Theory Appl. 2015, 43, 1719–1742. Available online: https://mc.manuscriptcentral.com/ijcta (accessed on 8 January 2023). [CrossRef] [Green Version]
  10. Chua, L. Memristor—The Missing Circuit Element. IEEE Trans. Circuit Theory 1971, CT-18, 507–519. [Google Scholar] [CrossRef]
  11. Georgiou, P.S.; Barahona, M.; Yaliraki, S.N.; Drakakis, E.M. On memristor ideality and reciprocity. Microelectron. J. 2014, 45, 1363–1371. [Google Scholar] [CrossRef] [Green Version]
  12. Rudowicz, C.; Sung, H.W.F. Textbook treatments of the hysteresis loop for ferromagnets-Survey of misconceptions and misinterpretations. Am. J. Phys. 2003, 71, 1080. [Google Scholar]
  13. Pershin, Y.; Ventra, M.D. Experimental demonstration of associative memory with memristive neural networks. Nat. Preced. 2009. [Google Scholar] [CrossRef]
  14. Pershin, Y.; Ventra, M.D.; Chua, L. Circuit Elements with Memory: Memristors, Memcapacitors, and Meminductors. Proc. IEEE 2009, 97, 1717–1724. [Google Scholar]
  15. Wang, F.Z.; Chua, L.O.; Yang, X.; Helian, N.; Tetzlaff, R.; Schmidt, T.; Li, L.; Carrasco, J.M.; Chen, W.; Chu, D. Adaptive Neuromorphic Architecture (ANA). Neural Netw. 2013, 45, 111–116. [Google Scholar] [CrossRef] [PubMed]
  16. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  17. Hodgkin, A.; Huxley, A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef] [PubMed]
  18. Chua, L.; Sbitnev, V.; Kim, H. Hodgkin–Huxley axon is made of memristors. Int. J. Bifurc. Chaos 2012, 22, 1230011. [Google Scholar] [CrossRef]
  19. Chua, L.; Sbitnev, V.; Kim, H. Neurons Are Poised Near The Edge Of Chaos. Int. J. Bifurc. Chaos 2012, 22, 1250098. [Google Scholar] [CrossRef]
  20. Sah, M.; Kim, H.; Chua, L. Brains Are Made of Memristors. IEEE Circuits Syst. Mag. 2014, 14, 12–36. [Google Scholar] [CrossRef]
  21. Chaisson, E.J. Cosmic Evolution—Biological, Havard University Course Syllabus, version 7; NIH The National Center for Biotechnology Information: Bethesda, MD, USA, 2012. [Google Scholar]
  22. Wang, F.Z.; Helian, N.; Wu, S.; Yang, X.; Guo, Y.; Lim, G.; Rashid, M.M. Delayed switching applied to memristor neural networks. J. Appl. Phys. 2012, 111, 07E317. [Google Scholar] [CrossRef]
  23. Wang, F.Z.; Helian, N.; Wu, S.; Lim, M.G.; Guo, Y.; Parker, M.A. Delayed Switching in Memristors and Memristive Systems. IEEE Electron Device Lett. 2010, 31, 755–757. [Google Scholar] [CrossRef] [Green Version]
  24. Turing, A.M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936, 42, 230–265. [Google Scholar]
  25. Post, E. Recursive Unsolvability of a Problem of Thue. J. Symb. Log. 1947, 12, 1–11. [Google Scholar] [CrossRef] [Green Version]
  26. Turing, A.M. Intelligent Machinery; University Park Press: Baltimore, MD, USA, 1968; p. 31. [Google Scholar]
  27. Hennie, F.C.; Stearns, F.C. Two-tape simulation of multitape Turing machines. JACM 1966, 13, 533–546. [Google Scholar] [CrossRef]
  28. von Neumann, J. First Draft of a Report on the EDVAC (PDF), 1945. Available online: http://abelgo.cn/cs101/papers/Neumann.pdf (accessed on 8 January 2023).
  29. Markgraf, J.D. The Von Neumann Bottleneck. Available online: https://web.archive.org/web/20131212205159/http://aws.linnbenton.edu/cs271c/markgrj/ (accessed on 8 January 2023).
  30. MFTL (My Favorite Toy Language) Entry Jargon File 4.4.7. Available online: http://catb.org/jargon/html/ (accessed on 8 January 2023).
  31. Copeland, B. (Ed.) The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma; Clarendon Press (Oxford University Press): Oxford, UK, 2004. [Google Scholar]
  32. Encephalo-Etymology Online Etymology Dictionary. Available online: https://www.etymonline.com/ (accessed on 8 January 2023).
  33. Parent, A.; Carpenter, M.B. Ch. 1. Carpenter’s Human Neuroanatomy; Williams & Wilkins: Baltimore, MD, USA, 1995; ISBN 978-0-683-06752-1. [Google Scholar]
  34. Bigos, K.L.; Hariri, A.; Weinberger, D. Neuroimaging Genetics: Principles and Practices; Oxford University Press: Oxford, UK, 2015; p. 157. ISBN 978-0-19-992022-8. [Google Scholar]
  35. Cosgrove, K.P.; Mazure, C.M.; Staley, J.K. Evolving knowledge of sex differences in brain structure, function, and chemistry. Biol. Psychiatry 2007, 62, 847–855. [Google Scholar] [CrossRef] [Green Version]
  36. Molina, D.K.; DiMaio, V.J.M. Normal Organ Weights in Men. Am. J. Forensic Med. Pathol. 2012, 33, 368–372. [Google Scholar] [CrossRef]
  37. “Amoeba” Archived at the Wayback Machine. Available online: Oxforddictionaries.com (accessed on 8 January 2023).
  38. Singleton, P. Dictionary of Microbiology and Molecular Biology, 3rd ed.; John Wiley & Sons: Chichester, UK, 2006; p. 32. ISBN 978-0-470-03545-0. [Google Scholar]
  39. Patterson, D.J. Amoebae: Protists Which Move and Feed Using Pseudopodia. Tree of Life Web Project. Available online: http://tolweb.org/accessory/Amoebae?acc_id=51 (accessed on 8 January 2023).
  40. “The Amoebae” The University of Edinburgh. Available online: https://maciverlab.bms.ed.ac.uk/amoebae.htm (accessed on 8 January 2023).
  41. van Egmond, W. Sun Animalcules and Amoebas. Microscopy-UK. Available online: http://www.microscopy-uk.org.uk/mag/indexmag.html?http://www.microscopy-uk.org.uk/mag/wimsmall/sundr.html (accessed on 8 January 2023).
  42. Nelson, M.E. Electrophysiological Models. In Databasing the Brain: From Data to Knowledge; Koslow, S., Subramaniam, S., Eds.; Wiley: New York, NY, USA, 2005; pp. 285–301. [Google Scholar]
  43. Gray, D.J.; Wu, S.M. Foundations of Cellular Neurophysiology, 3rd ed.; MIT Press: Cambridge, MA, USA, 1997; ISBN 978-0-262-10053-3. [Google Scholar]
  44. Krapivin, V.F.; Varotsos, C.A.; Soldatov, V.Y. New Ecoinformatics Tools in Environmental Science: Applications and Decision-Making; Springer: New York, NY, USA, 2015; pp. 37–38. ISBN 9783319139784. [Google Scholar]
  45. Rakowski, R.F.; Gadsby, D.C.; De Weer, P. Stoichiometry and voltage dependence of the sodium pump in voltage-clamped, internally dialyzed squid giant axon. J. Gen. Physiol. 1989, 93, 903–941. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Hille, B. Ion Channels of Excitable Membranes, 3rd ed.; Sinauer: Sunderland, MA, USA, 2001; ISBN 978-0-87893-321-1. [Google Scholar]
  47. Porteus, S.D. Primitive Intelligence and Environment. Nature 1938, 142, 774. [Google Scholar] [CrossRef]
  48. Sample, I. Primitive intelligence, New Scientist, 27 September 2000. Available online: https://www.newscientist.com/article/dn25-primitive-intelligence/ (accessed on 8 January 2023).
  49. Näätänen, R.; Tervaniemi, M.; Sussman, E.; Paavilainen, P.; Winkler, I. Primitive intelligence in the auditory cortex. Trends Neurosci. 2001, 24, 283–288. [Google Scholar] [CrossRef] [PubMed]
  50. Kindratenko, V. Novel Computing Architectures. Comput. Sci. Eng. 2009, 11, 54–57. [Google Scholar] [CrossRef]
  51. Novel Computing Platforms and Information Processing Approaches. Available online: https://csl.illinois.edu/research/impact-areas/health-it/novel-computing-platforms-and-information-processing-approaches (accessed on 6 February 2023).
Figure 1. In this study, we found that a coil with a magnetic core is, in fact, an inductor with memory (meminductor) in terms of its inductance being a function of the charge. The Oersted field generated by the current i rotates or switches the magnetization M inside the core and consequently the switched flux φ induces a voltage v across the conductor. The history of the current passing through the coil [ i ( t ) d t = q ( t ) ] is remembered by the magnetization inside the magnetic core.
Figure 2. The rotational model used in the coil core structure. If the magnetic field H is applied in the Z direction, the saturation magnetization vector MS(t) follows a precession trajectory (blue) from its initial position (θ0π, m0 ≈ −1) and the angle θ decreases with time continuously until (θ0, m ≈ 1), i.e., the magnetization MS(t) reverses itself and is eventually aligned with the magnetic field H.
Figure 3. The constitutional ρ–q curve of the meminductor. It complies with the three criteria for the ideality of an ideal circuit element with memory [10,11]: a. nonlinear; b. continuously differentiable; and c. strictly monotonically increasing. With the accumulation of the charge, $L(q) = \frac{d\rho}{dq}$ decreases like a staircase.
Figure 4. The m–H hysteresis loops simulated by the rotational model. The solid line in red represents a gradual m(t) rotation (with a finite slope) under $H(t) \propto i(t) = I_0 \sin\omega t$, $m_0 = \pm 0.99$. The dashed line in blue represents a fast m(t) rotation (with an infinite slope) under a step-function H.
Figure 5. A typical m–H loop of real-world magnetic materials [12].
Figure 6. The simulated m–H loop based on $m = \tanh(A(H \pm H_C))$ (H_C is the coercive force) with a sine-wave input current. Two tanh branches are used, and a horizontal shift is applied to each branch to obtain hysteresis.
Figure 7. An RLC neuromorphic circuit using a coil core meminductor, L(q), to scan a frequency range. An amoeba’s behavior is simulated with the damped oscillations of this circuit.
Figure 8. Simulated and experimental responses of the neuromorphic circuit. $L[q(t)] = L\left[\int i(t)\,dt\right]$ starts at 2 H and then decreases by 20% after each stimulus pulse. The circuit’s resonance frequency, determined by the staircased L(q) (Figure 3), increases as stimulus pulses accumulate. The simulation in (a) agrees with our experiment in (b) on a hardware emulator built with a dsPIC30F2011 microcontroller, an MCP4261 digital potentiometer and a differential 12-bit ADC.
Figure 9. The neuromorphic RLC circuit in Figure 7 is a deep learning neural network with multiple layers between the input and output layers. The complicated function e α t sin 2 π f 0 t is decomposed into two simple functions: e α t and sin 2 π f 0 t , each of which can be implemented in one layer. The former is determined by R and L(q), whereas the latter is determined by L(q) and C.
