Ido Kanter

Professor Ido Kanter
Born: November 21, 1959
Nationality: Israeli
Citizenship: Israel
Alma mater: Bar-Ilan University
Awards:
  • Weizmann Postdoctoral Fellowship (1988-1989)
  • Humboldt Senior Research Prize (2001)
Scientific career
Fields:
  • Theory of neural networks
  • Physical random number generators
  • In vitro neuroscience
  • Deep learning
  • Synchronization of neurons and lasers
  • Neural cryptography
Institutions: Bar-Ilan University; Princeton University (postdoc, with P. W. Anderson)
Doctoral advisor: Haim Sompolinsky

Ido Kanter (born November 21, 1959) is an Israeli professor of physics and head of the Lab for Reverberating Modes in Neural Networks at the Gonda Brain Research Center at Bar-Ilan University. He specializes in models of disordered magnetic systems, physical random number generators, the theory of neural networks, deep learning, and synchronization among neurons and lasers, documented in more than 200 publications.


His research on the behavior of artificial networks led him to conduct biological experiments. He made the unusual transition from theoretical physics to experimental neuroscience as a full professor in 2012, following the advice of Professor Philip W. Anderson, a Nobel laureate and his supervisor during his postdoc at Princeton University: "Follow your dreams."



Education

Ido Kanter was born and raised in Rehovot, Israel, and served in the Israel Defense Forces from 1978 to 1981. Kanter graduated summa cum laude from Bar-Ilan University with a bachelor's degree in physics and computer science in 1983. In 1987, he received his direct Ph.D. from Bar-Ilan University with his thesis, "Theory of Spin Glasses and its Applications to Complex Problems in Mathematics and Biology," under the supervision of Professor Haim Sompolinsky.

Academic Career

After completing his Ph.D., Kanter joined Professor Philip W. Anderson's group at Princeton University as a visiting research fellow (1988-1989). He was also a visiting research fellow at AT&T Bell Labs, collaborating with Yann LeCun (1989). In 1989, Kanter became a senior lecturer at Bar-Ilan University. He became an associate professor in 1991, at age 31, and a full professor in 1996.


Research

The main discoveries and achievements of Kanter's research include:


1. Ultra-fast Physical Random Number Generators

Scheme of a chaotic laser (Top) and a random number generator (Bottom).

Random bit sequences are a necessary element in many aspects of modern life, including secure communication, Monte Carlo simulations, stochastic modeling, and online gaming. For decades, they were generated at high data rates by deterministic algorithms seeded with a secret value, known as pseudorandom bit generators. However, their unpredictability is limited by their deterministic origin. Nondeterministic random bit generators (RBGs) rely on stochastic physical processes, but until 2008 their generation rates remained below 100 Mbit/s, a few orders of magnitude below those of the deterministic algorithms. The feasibility of fast stochastic physical RBGs was an open question for decades. A breakthrough occurred in late 2008, when Professor Atsushi Uchida and colleagues combined the binary digitization of two independent chaotic semiconductor lasers (SLs) to achieve a 1.7 Gbit/s RBG. Months later, a 12.5 Gbit/s RBG based on a single chaotic SL was achieved by Kanter et al.[1], and in late 2009 its generation rate was increased to 300 Gbit/s[2]. The method is robust to perturbations and to variations in control parameters, and is used in many applications, including on-chip implementations.
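The digitization idea can be illustrated with a toy sketch: a chaotic signal is sampled, quantized, and only low-order bits are kept. Here a logistic map stands in for the chaotic laser intensity; the actual experiments sample a real laser at GHz rates, so this is only a conceptual analogy, not the published setup.

```python
# Toy sketch of the digitization idea behind chaotic RBGs.
# The logistic map is a stand-in for the chaotic laser intensity.
def chaotic_bits(n_bits, seed=0.123456789, bit=3):
    x = seed
    bits = []
    for _ in range(n_bits):
        x = 3.99 * x * (1.0 - x)          # chaotic logistic-map iterate
        sample = int(x * 256) & 0xFF      # 8-bit "digitizer" sample
        bits.append((sample >> bit) & 1)  # keep one low-order bit
    return bits

bits = chaotic_bits(10000)
print(sum(bits) / len(bits))  # fraction of ones, near 0.5
```

Discarding the high-order bits is essential: they follow the slow, predictable envelope of the signal, while the low-order bits flip on the fine scale where the chaos dominates.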


2. The New Neuron

Dendritic learning as an alternative to synaptic plasticity (with audio)

Neurons are the basic computational building blocks that compose the brain. According to the neuronal computational scheme, which has been used since the beginning of the 20th century, each neuron functions as a centralized excitable element. The neuron accumulates incoming electrical signals from connected neurons through several terminals (dendrites) and generates a short electrical pulse, known as a spike, when its threshold is crossed. Using new types of experiments on neuronal cultures, Kanter and his experimental research group have demonstrated that this century-old assumption regarding brain activity is mistaken. They showed that each neuron functions as a collection of excitable elements[3], where each excitable element is sensitive to the directionality of the origin of the input signal. Two weak inputs from different directions (different dendrites) will not sum up to generate a spike, while two weak inputs from the same direction (same dendrite) will generate a spike. This anisotropy was experimentally extended to other forms of reversible, anisotropic neuronal plasticity: neuronal response latency[4], response failures[5], absolute refractory periods[6][7], and spike waveforms.[3] The timescale of the reversible plasticity is broadband, ranging from microseconds to tens of seconds.
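The directionality rule described above can be contrasted with the classical summation scheme in a small toy model (the threshold and input values below are arbitrary assumptions, not the measured parameters):

```python
# Toy contrast between the classical centralized neuron and a neuron
# modeled as independent threshold units, one per dendrite.
THRESHOLD = 1.0  # arbitrary illustrative threshold

def classic_neuron(inputs):
    # Centralized scheme: all dendritic inputs sum at a single point.
    return sum(v for _, v in inputs) >= THRESHOLD

def anisotropic_neuron(inputs):
    # Each dendrite sums only its own inputs and thresholds locally.
    per_dendrite = {}
    for dendrite, v in inputs:
        per_dendrite[dendrite] = per_dendrite.get(dendrite, 0.0) + v
    return any(total >= THRESHOLD for total in per_dendrite.values())

two_different = [("A", 0.6), ("B", 0.6)]  # weak inputs, different dendrites
two_same      = [("A", 0.6), ("A", 0.6)]  # weak inputs, same dendrite

print(classic_neuron(two_different))      # True: the sums cross the threshold
print(anisotropic_neuron(two_different))  # False: no single dendrite crosses it
print(anisotropic_neuron(two_same))       # True: one dendrite crosses it
```

The two schemes disagree exactly on the case the experiments probed: weak inputs arriving via different dendrites.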


3. Dendritic Learning as a Paradigm Shift in Brain Learning

In 1949, Donald Hebb's pioneering work suggested that learning occurs in the brain by modifying the strength of the synapses, while neurons function as the computational elements of the brain. This has remained the common assumption in neuroscience and initiated the research field of machine learning decades ago. Using new experiments on neuronal cultures, Kanter's research group revealed a new underlying mechanism for fast (several seconds) brain learning: dendritic learning[8][9][10]. This fast mechanism stands in contrast to the previous common belief, based solely on slow (tens of minutes) synaptic plasticity. The new paradigm indicates that fast dendritic learning occurs closer to the neuron, the computational unit. It presents a new type of abundant cooperative nonlinear dynamics, as effectively all synapses incoming to an updated dendrite concurrently undergo the same adaptation. In addition, dendritic strengths are self-oscillating[11], and weak synapses, which comprise the majority of the brain's synapses and were previously assumed to be insignificant, may play a key role in brain activity.


4. The Inverse Problem

Inverse problem scheme (blue).

The usual procedure of statistical physics aims at calculating macroscopic quantities (such as pressure and magnetization) from model parameters (the Hamiltonian). In the inverse problem this procedure is reversed: one seeks the parameters of a model that reproduce required statistical properties. Kanter was a pioneer of this field of research. In 1994, he asked, "Do classical spin systems with the same metastable states have identical Hamiltonians?"[12] A year later he generalized this method to neural networks[13] and then to the simplex method[14]. These types of inverse problems opened a new research horizon in statistical mechanics, followed by numerous physical-system analyses and interdisciplinary directions.


5. Shannon Meets Carnot: Generalized Second Thermodynamic Law

Top: A temperature-dependent spring constant at High/Cold (TH/TC) temperatures. Bottom: Carnot cycle (black), information heat-engine (red).

The laws of thermodynamics describe the transport of heat and work in macroscopic processes and play a fundamental role in the physical sciences. In particular, the second law of thermodynamics linearly relates the change in entropy, dS, to the amount of heat, dQ, absorbed by a system at equilibrium, dQ = TdS, thus defining the temperature, T, of the system. The prior scientific belief was that information theory is primarily a mathematical creature with a vitality of its own, independent of the physical laws of nature. Kanter, together with his Ph.D. student Ori Shental, pioneered the bridge between Shannon theory and the second law of thermodynamics[15] and the fusion of statistical physics with information and communication theories[16]. In this line of work, Shental and Kanter generalized the second law of thermodynamics to encompass systems with temperature-dependent Hamiltonians, obtaining dQ = TdS + ⟨dH/dT⟩dT, where ⟨·⟩ denotes averaging over the Boltzmann distribution.
This generalized second law of thermodynamics reveals a new definition of the basic notion of temperature and suggests an information heat engine[16]. Interestingly, it provides a quantitative bridge between the realm of thermodynamics and information theory in the context of communication channels, such as the popular Gaussian and binary symmetric channels. Therefore, purely information-theoretic measures, such as entropy, mutual information, and channel capacity, are correctly re-derived from thermodynamics.
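A short sketch, assuming the Gibbs distribution, shows how a temperature-dependent Hamiltonian adds a correction term to dQ = TdS (a consistency check, not the cited paper's derivation):

```latex
% Heat absorbed when the Hamiltonian H_i(T) depends explicitly on T:
\begin{align}
  \mathrm{d}Q &= \mathrm{d}U
     = \sum_i H_i\,\mathrm{d}p_i + \sum_i p_i\,\mathrm{d}H_i,\\
  \sum_i H_i\,\mathrm{d}p_i &= T\,\mathrm{d}S
     \qquad \Bigl(p_i = e^{-H_i/T}/Z,\;
            S = -\textstyle\sum_i p_i \ln p_i,\;
            \textstyle\sum_i \mathrm{d}p_i = 0\Bigr),\\
  \Rightarrow\quad \mathrm{d}Q &= T\,\mathrm{d}S
     + \Bigl\langle \frac{\mathrm{d}H}{\mathrm{d}T} \Bigr\rangle \mathrm{d}T.
\end{align}
```

The first term is the familiar heat of redistributing probability among fixed energy levels; the second is the energy change due to the levels themselves shifting with temperature, which vanishes for an ordinary temperature-independent Hamiltonian.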


6. Zero-lag Synchronization and the Greatest Common Divisor of Network Loops

Nodes with the same color are in zero-lag synchronization, GCD=1 (Top) GCD=5 (Bottom)

The emergence of zero-lag synchronization among distant excitable or chaotic units, without a common input, remained a puzzle for many years. Between 2008 and 2010, a new non-local mechanism for the zero-lag synchronization of a network of chaotic units with time-delayed couplings was presented by Kanter and Professor Wolfgang Kinzel. The non-local mechanism is the greatest common divisor (GCD) of the network's loops[17]. When the GCD of the loop lengths is 1, all units are in zero-lag synchronization; when the GCD exceeds 1, the network splits into GCD clusters in which clustered units are in zero-lag synchronization. These results are supported by simulations of chaotic networks, mixing arguments, and analytical solutions. The zero-lag synchronization of distant systems and the non-local GCD mechanism were experimentally tested on reverberating activity patterns embedded in networks of cortical neurons[18][19], controlling synchronization in large laser networks[20][21], synchronization in small networks of time-delay coupled chaotic diode lasers[22], and chaos synchronization in networks of semiconductor superlattices[23].
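The cluster-counting rule can be made concrete with a small sketch that enumerates the directed loops of a toy network and takes the GCD of their lengths (brute-force enumeration, suitable only for tiny illustrative graphs):

```python
import math

def cycle_lengths(adj):
    """Collect the lengths of simple directed cycles via DFS (small graphs only)."""
    lengths = set()
    def dfs(start, node, depth, visited):
        for nxt in adj[node]:
            if nxt == start:
                lengths.add(depth + 1)
            elif nxt not in visited:
                dfs(start, nxt, depth + 1, visited | {nxt})
    for s in adj:
        dfs(s, s, 0, {s})
    return lengths

def predicted_clusters(adj):
    # The GCD mechanism: units split into GCD(loop lengths) zero-lag clusters.
    return math.gcd(*cycle_lengths(adj))

ring5 = {0: [1], 1: [2], 2: [3], 3: [4], 4: [0]}                   # one loop of 5
ring6_chord = {0: [1], 1: [2], 2: [3, 0], 3: [4], 4: [5], 5: [0]}  # loops 6 and 3

print(predicted_clusters(ring5))        # 5: five zero-lag clusters
print(predicted_clusters(ring6_chord))  # 3: gcd(6, 3)
```

Adding the single chord to the 6-ring changes the loop set from {6} to {6, 3}, collapsing six clusters into three; a chord producing a loop of length 5 would give gcd(6, 5) = 1 and full zero-lag synchronization.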


7. Neural Cryptography

Synchronized weights (key) between two parity machines by mutual communication.

A bridge between the synchronization of two mutually coupled feedforward networks[24] and the generation of a symmetric key-exchange protocol over a public channel was established by Kanter and Professor Wolfgang Kinzel in 2001[25]. A passive attacker who knows the protocol and all details of the transmitted data will find it difficult to reveal the mutual key. Simulations and analytical work indicate that synchronization is faster than tracking by a passive attacker. This type of key-exchange protocol was extended to the synchronization of two mutually delay-coupled deterministic chaotic maps[26][27] and was examined experimentally using two mutually coupled chaotic semiconductor lasers[28][29][30]. In some limits, the task of a passive attacker maps onto Hilbert's tenth problem[26], solving a set of nonlinear Diophantine equations, which was proven to belong to the class of NP-complete problems. This bridge between nonlinear dynamics and NP-complete problems opens a horizon for new types of secure public-channel protocols.
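The mutual-learning idea can be sketched with the tree parity machine commonly used to describe neural key exchange. The parameters below (3 hidden units, 10 inputs each, weight bound 3) are toy choices for illustration, and this minimal version ignores the attack analyses that motivate the real protocol design:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound (toy sizes)

class TreeParityMachine:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))  # secret integer weights

    def output(self, x):
        # sigma: sign of each hidden unit's local field; tau: their product
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def update(self, x, tau):
        # Hebbian rule: adjust only hidden units that agree with the output
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

a, b = TreeParityMachine(), TreeParityMachine()
steps = 0
while steps < 100000 and not np.array_equal(a.w, b.w):
    x = rng.choice([-1, 1], size=(K, N))  # public random input
    ta, tb = a.output(x), b.output(x)     # public outputs
    if ta == tb:                          # both update only on agreement
        a.update(x, ta)
        b.update(x, tb)
    steps += 1
print("synchronized after", steps, "exchanged inputs")
```

Only the inputs and the single-bit outputs cross the public channel; the weights, which become the shared key once identical, are never transmitted.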


8. Physics Assists with Key Challenges in Artificial Intelligence

The brain is a source for a new AI mechanism (with audio)

From 2020 to 2023, Kanter's research shed light on the following fundamental theoretical questions regarding deep learning (DL):

(a) Is shallow learning equivalent to DL? Is DL a necessary ingredient for artificial intelligence (AI)?[31]

(b) What is the best location of the pooling operators to enhance accuracy?[32]

(c) Is brain learning, based on tree architectures only, weaker than AI?[33]

(d) Is there a universal law regarding how to efficiently build DL architectures?[34]

(e) Can error rates follow a universal law as a function of dataset sizes?[33]

(f) What is the mechanism underlying DL?[35]

External Links

1. Ido Kanter's personal website: https://kanterlabsite.wixsite.com/idokanter

2. Ido Kanter's Bar-Ilan website: https://physics.biu.ac.il/en/node/578

3. Selected press releases: https://kanterlabsite.wixsite.com/idokanter/press-articles

4. Video on "Dendritic learning as an alternative to synaptic plasticity": https://vimeo.com/702894966

5. Video on "Unreliable neurons improve brain functionalities and cryptography": https://vimeo.com/752532666

6. Ido Kanter's Google Scholar profile: https://scholar.google.com/citations?user=0MdAUb0AAAAJ&hl=en

7. Ido Kanter's ResearchGate: https://www.researchgate.net/profile/Ido-Kanter

8. Ido Kanter's LinkedIn: https://www.linkedin.com/in/ido-kanter-8448a016/recent-activity/all/


References

  1. ^ Reidler, I., Aviad, Y., Rosenbluh, M. & Kanter, I. Ultrahigh-speed random number generation based on a chaotic semiconductor laser. Physical review letters 103, 024102 (2009).
  2. ^ Kanter, I., Aviad, Y., Reidler, I., Cohen, E. & Rosenbluh, M. An optical ultrafast random bit generator. Nature Photonics 4, 58-61 (2010).
  3. ^ a b Sardi, S., Vardi, R., Sheinin, A., Goldental, A. & Kanter, I. New types of experiments reveal that a neuron functions as multiple independent threshold units. Sci Rep-Uk 7, 18036 (2017).
  4. ^ Vardi, R., Timor, R., Marom, S., Abeles, M. & Kanter, I. Synchronization with mismatched synaptic delays: A unique role of elastic neuronal latency. EPL (Europhysics Letters) 100, 48003 (2012).
  5. ^ Vardi, R., Goldental, A., Marmari, H., Brama, H., Stern, E. A., Sardi, S., Sabo, P. & Kanter, I. Neuronal response impedance mechanism implementing cooperative networks with low firing rates and μs precision. Frontiers in neural circuits 9, 29 (2015).
  6. ^ Sardi, S., Vardi, R., Tugendhaft, Y., Sheinin, A., Goldental, A. & Kanter, I. Long anisotropic absolute refractory periods with rapid rise times to reliable responsiveness. Physical Review E 105, 014401 (2022).
  7. ^ Vardi, R., Tugendhaft, Y., Sardi, S. & Kanter, I. Significant anisotropic neuronal refractory period plasticity. EPL (Europhysics Letters) 134, 60007 (2021).
  8. ^ Sardi, S., Vardi, R., Meir, Y., Tugendhaft, Y., Hodassman, S., Goldental, A. & Kanter, I. Brain experiments imply adaptation mechanisms which outperform common AI learning algorithms. Sci Rep-Uk 10, 1-10 (2020).
  9. ^ Sardi, S., Vardi, R., Goldental, A., Sheinin, A., Uzan, H. & Kanter, I. Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links. Sci Rep-Uk 8, 5100, doi:10.1038/s41598-018-23471-7 (2018).
  10. ^ Sardi, S., Vardi, R., Goldental, A., Tugendhaft, Y., Uzan, H. & Kanter, I. Dendritic learning as a paradigm shift in brain learning. ACS chemical neuroscience 9, 1230-1232 (2018).
  11. ^ Uzan, H., Sardi, S., Goldental, A., Vardi, R. & Kanter, I. Stationary log-normal distribution of weights stems from spontaneous ordering in adaptive node networks. Sci Rep-Uk 8, 13091 (2018).
  12. ^ Kanter, I. & Gotesdyner, R. Do classical spin systems with the same metastable states have identical Hamiltonians? Physical review letters 72, 2678 (1994).
  13. ^ Kanter, I., Kessler, D., Priel, A. & Eisenstein, E. Analytical study of time series generation by feed-forward networks. Physical review letters 75, 2614 (1995).
  14. ^ Keren, S., Kfir, H. & Kanter, I. Possible sets of autocorrelations and the simplex algorithm. Journal of Physics A: Mathematical and General 39, 4161 (2006).
  15. ^ Shental, O. & Kanter, I. Shannon meets Carnot: Generalized second thermodynamic law. Europhysics Letters 85, 10006 (2009).
  16. ^ a b Peleg, Y., Efraim, H., Shental, O. & Kanter, I. Mutual information via thermodynamics: three different approaches. Journal of Statistical Mechanics: Theory and Experiment 2010, P01014 (2010).
  17. ^ Kanter, I., Kopelowitz, E., Vardi, R., Zigzag, M., Kinzel, W., Abeles, M. & Cohen, D. Nonlocal mechanism for cluster synchronization in neural circuits. Europhysics Letters 93, 66001 (2011).
  18. ^ Vardi, R., Wallach, A., Kopelowitz, E., Abeles, M., Marom, S. & Kanter, I. Synthetic reverberating activity patterns embedded in networks of cortical neurons. Europhysics Letters 97, 66002 (2012).
  19. ^ Vardi, R., Timor, R., Marom, S., Abeles, M. & Kanter, I. Synchronization with mismatched synaptic delays: A unique role of elastic neuronal latency. Europhysics Letters 100, 48003 (2012).
  20. ^ Nixon, M., Fridman, M., Ronen, E., Friesem, A. A., Davidson, N. & Kanter, I. Controlling synchronization in large laser networks. Physical review letters 108, 214101 (2012).
  21. ^ Nixon, M., Friedman, M., Ronen, E., Friesem, A. A., Davidson, N. & Kanter, I. Synchronized cluster formation in coupled laser networks. Physical review letters 106, 223901 (2011).
  22. ^ Aviad, Y., Reidler, I., Zigzag, M., Rosenbluh, M. & Kanter, I. Synchronization in small networks of time-delay coupled chaotic diode lasers. Opt Express 20, 4352-4359 (2012).
  23. ^ Li, W., Aviad, Y., Reidler, I., Song, H., Huang, Y., Biermann, K., Rosenbluh, M., Zhang, Y., Grahn, H. T. & Kanter, I. Chaos synchronization in networks of semiconductor superlattices. Europhysics Letters 112, 30007 (2015).
  24. ^ Kinzel, W., Metzler, R. & Kanter, I. Dynamics of interacting neural networks. Journal of Physics A: Mathematical and General 33, L141 (2000).
  25. ^ Kinzel, W. & Kanter, I. Interacting neural networks and cryptography, in Advances in solid state physics 383-391 (Springer, 2002).
  26. ^ a b Kanter, I., Kopelowitz, E. & Kinzel, W. Public channel cryptography: chaos synchronization and Hilbert’s tenth problem. Phys Rev Lett 101, 084102 (2008).
  27. ^ Mislovaty, R., Klein, E., Kanter, I. & Kinzel, W. Public channel cryptography by synchronization of neural networks and chaotic maps. Physical review letters 91, 118701 (2003).
  28. ^ Klein, E., Gross, N., Rosenbluh, M., Kinzel, W., Khaykovich, L. & Kanter, I. Stable isochronal synchronization of mutually coupled chaotic lasers. Phys Rev E 73, 066214 (2006).
  29. ^ Kanter, I., Gross, N., Klein, E., Kopelowitz, E., Yoskovits, P., Khaykovich, L., Kinzel, W. & Rosenbluh, M. Synchronization of mutually coupled chaotic lasers in the presence of a shutter. Physical review letters 98, 154101 (2007).
  30. ^ Kanter, I., Butkovski, M., Peleg, Y., Zigzag, M., Aviad, Y., Reidler, I., Rosenbluh, M. & Kinzel, W. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography. Opt Express 18, 18292-18302 (2010).
  31. ^ Meir, Y., Tevet, O., Tzach, Y., Hodassman, S., Gross, R. D. & Kanter, I. Efficient shallow learning as an alternative to deep learning. Scientific Reports 13, 5423 (2023).
  32. ^ Meir, Y., Tzach, Y., Gross, R. D., Tevet, O., Vardi, R. & Kanter, I. Enhancing the success rates by performing pooling decisions adjacent to the output layer. Scientific Reports (2023). https://www.nature.com/articles/S41598-023-40566-Y
  33. ^ a b Meir, Y., Ben-Noam, I., Tzach, Y., Hodassman, S. & Kanter, I. Learning on tree architectures outperforms a convolutional feedforward network. Scientific Reports 13, 962 (2023).
  34. ^ Meir, Y., Sardi, S., Hodassman, S., Kisos, K., Ben-Noam, I., Goldental, A. & Kanter, I. Power-law scaling to assist with key challenges in artificial intelligence. Scientific reports 10, 19628 (2020).
  35. ^ Tzach, Y., Meir, Y., Tevet, O., Gross, R. D., Hodassman, S., Vardi, R. & Kanter, I. The mechanism underlying successful deep learning. arXiv preprint arXiv:2305.18078 (2023).
