{{Short description|Israeli physicist}}

{{Infobox scientist
| honorific_prefix = Professor
| native_name =
| native_name_lang =
| image = Ido Kanter picture.jpg <!--(filename only, i.e. without "File:" prefix)-->
| image_size =
| image_upright =
| siglum =
| pronounce =
| nationality = [[Israelis|Israeli]] <!-- use only when necessary per [[WP:INFONAT]] -->
| fields = {{plainlist|*Theory of neural networks
*Physical random number generators
*Neuroscience in-vitro
*Deep learning
*Synchronization of neurons and lasers
*Neural cryptography}}
| workplaces = [[Bar-Ilan University]]<br />Postdoc: [[Princeton University]], with [[P. W. Anderson]]
| patrons =
| education =
| thesis_url = <!--(or | thesis1_url = and | thesis2_url = )-->
| thesis_year = <!--(or | thesis1_year = and | thesis2_year = )-->
| doctoral_advisor = [[Haim Sompolinsky]] <!--(or | doctoral_advisors = )-->
| academic_advisors =
| doctoral_students =
| influences =
| influenced =
| awards = {{plainlist|*Weizmann Postdoctoral Fellowship (1988–1989)
*Humboldt Senior Research Prize (2001)}}
| author_abbrev_bot =
| author_abbrev_zoo =
| footnotes =
}}
[[File:Ido Kanter Experimental setup.jpg|thumb|303px]]
'''Ido Kanter''' (born 21 November 1959) is an Israeli professor of [[physics]] and head of the Lab for Reverberating Modes in Neural Networks at the Gonda Brain Research Center at [[Bar-Ilan University]]. He specializes in models of disordered magnetic systems, physical random number generators, the theory of [[neural networks]], [[deep learning]], and [[synchronization]] among [[neurons]] and [[lasers]].

His research on the behavior of artificial networks led him to conduct biological experiments. In 2012, he made the unusual transition from theoretical physics to experimental neuroscience as a full professor, following the advice of his postdoctoral supervisor at Princeton University, [[Nobel prize|Nobel laureate]] [[P. W. Anderson|Phil Anderson]]: "Follow your dreams."

==Early life and education==
Kanter was born and raised in [[Rehovot]], [[Israel]], and served in the [[Israel Defense Forces]] from 1978 to 1981.<ref name="About me - contributions"/>

Kanter graduated summa cum laude from [[Bar-Ilan University]] with a bachelor's degree in [[physics]] and [[computer science]] in 1983. In 1987, he received his direct [[Ph.D.]] from Bar-Ilan University with his [[thesis]], ''Theory of Spin Glasses and its Applications to Complex Problems in Mathematics and Biology'', under the supervision of Professor [[Haim Sompolinsky]].<ref name="About me - contributions">{{cite web |title=About me |url=https://kanterlabsite.wixsite.com/idokanter/about-me |publisher=Kanter Lab |at=Download Main Contributions |access-date=25 April 2024}}</ref>

==Academic career==
After completing his Ph.D., Kanter joined Professor [[P. W. Anderson|Phil W. Anderson]]'s group at [[Princeton University]] as a visiting research fellow (1988–1989). He was also a visiting research fellow at AT&T Bell Labs, collaborating with [[Yann LeCun]] (1989).<ref name="About me - contributions"/> In 1989, he joined the physics department at Bar-Ilan University as a senior lecturer; he became an associate professor in 1991, at age 31, and a full professor in 1996.
==Research==
Kanter specializes in models of disordered magnetic systems, ultrafast physical random number generators, the theory of neural networks, neural cryptography, deep learning, synchronization among neurons and lasers, and experimental and theoretical neuroscience, documented in more than 220 publications.<ref>[https://scholar.google.com/citations?user=0MdAUb0AAAAJ&hl=en Ido Kanter's Google Scholar profile]</ref>

==Main contributions==
[[File:New brain learning.webm|thumb|Dendritic learning as an alternative to synaptic plasticity (with audio)|thumbtime=0|289x289px]]Using a combination of theoretical and experimental methods,<ref>{{cite web | url=https://kanterlabsite.wixsite.com/idokanter/about-me | title=About Me }}</ref> Kanter has made contributions to fields ranging from statistical physics and communication to neural cryptography and neuroscience.<ref>{{cite web | url=https://physics.biu.ac.il/en/node/578 | title=Kanter Ido &#124; Department of Physics }}</ref> These include work on a field of statistical physics known as the inverse problem,<ref>{{cite journal | last=Kanter | first=I. | last2=Gotesdyner | first2=R. | title=Do classical spin systems with the same metastable states have identical Hamiltonians? | journal=Physical Review Letters | volume=72 | issue=17 | date=1994 | doi=10.1103/PhysRevLett.72.2678 | pages=2678–2681}}</ref> bridging Shannon theory and the second law of thermodynamics,<ref>{{cite journal | last=Shental | first=O. | last2=Kanter | first2=I. | title=Shannon meets Carnot: Generalized second thermodynamic law | journal=EPL (Europhysics Letters) | volume=85 | issue=1 | date=2009 | doi=10.1209/0295-5075/85/10006 | page=10006| arxiv=0806.3763 }}</ref> a cryptographic key-exchange protocol based on neural networks,<ref>{{cite journal | last=Kanter | first=Ido | last2=Kopelowitz | first2=Evi | last3=Kinzel | first3=Wolfgang | title=Public Channel Cryptography: Chaos Synchronization and Hilbert's Tenth Problem | journal=Physical Review Letters | volume=101 | issue=8 | date=2008 | doi=10.1103/PhysRevLett.101.084102| arxiv=0806.0931 }}</ref> and an ultrafast non-deterministic random bit generator (RBG).<ref>{{cite journal | last1=Kanter | first1=Ido | last2=Aviad | first2=Yaara | last3=Reidler | first3=Igor | last4=Cohen | first4=Elad | last5=Rosenbluh | first5=Michael | title=An optical ultrafast random bit generator | journal=Nature Photonics | volume=4 | issue=1 | date=2010 | doi=10.1038/nphoton.2009.235 | pages=58–61| bibcode=2010NaPho...4...58K }}</ref>

Kanter currently focuses on experimental and theoretical neuroscience, studying topics including the new neuron,<ref>{{cite journal | last=Sardi | first=Shira | last2=Vardi | first2=Roni | last3=Sheinin | first3=Anton | last4=Goldental | first4=Amir | last5=Kanter | first5=Ido | title=New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units | journal=Scientific Reports | publisher=Springer Science and Business Media LLC | volume=7 | issue=1 | date=2017 | doi=10.1038/s41598-017-18363-1| pmc=5740076 }}</ref> dendritic learning,<ref>{{cite journal | last=Sardi | first=Shira | last2=Vardi | first2=Roni | last3=Goldental | first3=Amir | last4=Tugendhaft | first4=Yael | last5=Uzan | first5=Herut | last6=Kanter | first6=Ido | title=Dendritic Learning as a Paradigm Shift in Brain Learning | journal=ACS Chemical Neuroscience | volume=9 | issue=6 | date=2018 | doi=10.1021/acschemneuro.8b00204 | pages=1230–1232}}</ref> neural interfaces, and machine learning.<ref>{{cite web | url=https://gondabrain.biu.ac.il/en/node/317 | title=Reverberating Modes in Neural Networks &#124; the Gonda Multidisciplinary Brain Research Center }}</ref>

The main discoveries and achievements of Kanter's research include the following.
'''1. Ultra-fast Physical Random Number Generators'''
[[File:Random number generator.jpg|thumb|Scheme of a chaotic laser (upper) and a random number generator (lower).]]
Random bit sequences are a necessary element in many aspects of modern life, including secure [[communication]], [[Monte Carlo method|Monte Carlo simulations]], stochastic modeling, and online gaming. For decades, they were generated at high data rates by deterministic algorithms based on a secret seed, called pseudorandom bit generators. However, their unpredictability is limited by their deterministic origin. Nondeterministic random bit generators (RBGs) rely on stochastic physical processes, but until 2008 their generation rates remained below 100 Mbit/s, a few orders of magnitude below those of the deterministic algorithms. The feasibility of fast stochastic physical RBGs was an open question for decades. A breakthrough occurred in late 2008, when Professor Atsushi Uchida combined the binary digitization of two independent chaotic semiconductor lasers (SLs) to achieve a 1.7 Gbit/s RBG. Months later, a 12.5 Gbit/s RBG based on a single chaotic SL was achieved by Kanter et al.,<ref>Reidler, I., Aviad, Y., Rosenbluh, M. & Kanter, I. Ultrahigh-speed random number generation based on a chaotic semiconductor laser. Physical review letters 103, 024102 (2009).</ref> and in late 2009 its generation rate was increased to 300 Gbit/s.<ref>Kanter, I., Aviad, Y., Reidler, I., Cohen, E. & Rosenbluh, M. An optical ultrafast random bit generator. Nature Photonics 4, 58-61 (2010).</ref> The method is robust to perturbations and control parameters and is used in many applications, including on chips.
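The bit-extraction step can be illustrated with a minimal sketch in the spirit of the approach described above: a chaotic map stands in for the sampled laser intensity, and only the least-significant bits of each digitized sample are retained. The map, the 8-bit sample width, and the number of retained bits are illustrative choices, not the published experimental setup.

```python
def chaotic_bits(n_samples, n_lsb=4, x=0.2):
    """Digitize a chaotic signal and keep only its least-significant bits.

    The fully chaotic logistic map stands in for the sampled intensity of
    a chaotic laser; a real implementation digitizes an analog waveform
    with a fast ADC and discards the most-significant bits.
    """
    bits = []
    for _ in range(n_samples):
        x = 4.0 * x * (1.0 - x)          # logistic map at r = 4 (chaotic)
        sample = int(x * 256) & 0xFF     # 8-bit "ADC" sample
        for k in range(n_lsb):           # retain the n_lsb lowest bits
            bits.append((sample >> k) & 1)
    return bits

bits = chaotic_bits(1000)
print(sum(bits) / len(bits))  # close to 0.5 for a well-conditioned source
```

Discarding the most-significant bits removes the strong short-time correlations of the underlying waveform, which is what pushes the output toward statistical randomness.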


'''2. The New Neuron'''
[[File:New brain learning.jpg|thumb|Dendritic learning as an alternative to synaptic plasticity]]
Neurons are the basic computational building blocks that compose the brain. According to the neuronal computational scheme, which has been used since the beginning of the 20th century, each neuron functions as a centralized excitable element. The neuron accumulates its incoming electrical signals from connecting neurons through several terminals (dendrites) and generates a short electrical pulse, known as a spike, when its threshold is crossed.
Using new types of experiments on neuronal cultures, Kanter and his experimental research group have demonstrated that this century-old assumption regarding brain activity is mistaken. They showed that each neuron functions as a collection of excitable elements<ref>Sardi, S., Vardi, R., Sheinin, A., Goldental, A. & Kanter, I. New types of experiments reveal that a neuron functions as multiple independent threshold units. Sci Rep-Uk 7, 18036 (2017).</ref>, where each excitable element is sensitive to the directionality of the origin of the input signal. Two weak inputs from different directions (different dendrites) will not sum up to generate a spike, while two weak inputs from the same direction (same dendrite) will generate a spike.
This neuronal anisotropic feature was experimentally extended to the following anisotropic neuronal reversible plasticity: neuronal response latency<ref>Vardi, R., Timor, R., Marom, S., Abeles, M. & Kanter, I. Synchronization with mismatched synaptic delays: A unique role of elastic neuronal latency. EPL (Europhysics Letters) 100, 48003 (2012).</ref>, response failures<ref>Vardi, R., Goldental, A., Marmari, H., Brama, H., Stern, E. A., Sardi, S., Sabo, P. & Kanter, I. Neuronal response impedance mechanism implementing cooperative networks with low firing rates and μs precision. Frontiers in neural circuits 9, 29 (2015).</ref>, absolute refractory periods<ref>Sardi, S., Vardi, R., Tugendhaft, Y., Sheinin, A., Goldental, A. & Kanter, I. Long anisotropic absolute refractory periods with rapid rise times to reliable responsiveness. Physical Review E 105, 014401 (2022).</ref><ref>Vardi, R., Tugendhaft, Y., Sardi, S. & Kanter, I. Significant anisotropic neuronal refractory period plasticity. EPL (Europhysics Letters) 134, 60007 (2021).</ref>, and spike waveforms<ref>Sardi, S., Vardi, R., Sheinin, A., Goldental, A. & Kanter, I. New types of experiments reveal that a neuron functions as multiple independent threshold units. Sci Rep-Uk 7, 18036 (2017).</ref>. The timescale of reversible plasticity is broadband, ranging between microseconds and tens of seconds.
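The anisotropic summation described above can be caricatured in a few lines: each dendrite acts as an independent threshold unit, and inputs arriving on different dendrites never combine. The function name, input values, and threshold are hypothetical, for illustration only.

```python
def neuron_spikes(inputs_by_dendrite, threshold=1.0):
    """Neuron as a set of independent threshold units, one per dendrite.

    A spike is generated if the summed input on ANY single dendrite
    crosses the threshold; inputs on different dendrites do not sum
    (a sketch of the directional sensitivity described above).
    """
    return any(sum(inputs) >= threshold for inputs in inputs_by_dendrite)

# Two weak inputs (0.6 each) on the same dendrite cross the threshold:
print(neuron_spikes([[0.6, 0.6], []]))   # True
# The same two weak inputs on different dendrites do not:
print(neuron_spikes([[0.6], [0.6]]))     # False
```

Contrast this with the classical point-neuron picture, in which both cases would sum to 1.2 at the soma and produce a spike.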


'''3. Dendritic Learning as a Paradigm Shift in Brain Learning'''
In 1949, Donald Hebb's pioneering work suggested that learning occurs in the brain by modifying the strength of the synapses, while neurons function as the computational elements of the brain. This has remained the common assumption in neuroscience and initiated the research field of machine learning decades ago.
Using new experiments focused on neuronal cultures, Kanter's research group revealed a new underlying mechanism for a fast (several-second) brain learning process: dendritic learning<ref>Sardi, S., Vardi, R., Meir, Y., Tugendhaft, Y., Hodassman, S., Goldental, A. & Kanter, I. Brain experiments imply adaptation mechanisms which outperform common AI learning algorithms. Sci Rep-Uk 10, 1-10 (2020).</ref><ref>Sardi, S., Vardi, R., Goldental, A., Sheinin, A., Uzan, H. & Kanter, I. Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links. Sci Rep-Uk 8, 5100, doi:10.1038/s41598-018-23471-7 (2018).</ref><ref>Sardi, S., Vardi, R., Goldental, A., Tugendhaft, Y., Uzan, H. & Kanter, I. Dendritic learning as a paradigm shift in brain learning. ACS chemical neuroscience 9, 1230-1232 (2018).</ref>. This fast mechanism challenges the previous common belief, which rested solely on slow (tens of minutes) synaptic plasticity. The new paradigm indicates that fast dendritic learning occurs in closer proximity to the neuron, the computational unit. It presents a new type of abundant cooperative nonlinear dynamics, as effectively all incoming synapses to an updated dendrite concurrently undergo the same adaptation. In addition, dendritic strengths are self-oscillating,<ref>Uzan, H., Sardi, S., Goldental, A., Vardi, R. & Kanter, I. Stationary log-normal distribution of weights stems from spontaneous ordering in adaptive node networks. Sci Rep-Uk 8, 13091 (2018).</ref> and weak synapses, which comprise the majority of the brain's synapses and were previously assumed to be insignificant, may play a key role in brain activity.


'''4. The Inverse Problem'''
[[File:Inverse problem picture.png|thumb|Inverse problem scheme (blue).]]
The usual procedure of statistical physics is to calculate macroscopic quantities (such as pressure and magnetization) from the model parameters (the Hamiltonian). In the inverse problem this procedure is reversed: one seeks the parameters of a model that reproduce required statistical properties. Kanter was a pioneer of this field of research. In 1994, he asked the question, "Do classical spin systems with the same metastable states have identical Hamiltonians?"<ref>Kanter, I. & Gotesdyner, R. Do classical spin systems with the same metastable states have identical Hamiltonians? Physical review letters 72, 2678 (1994).</ref> A year later he generalized this method to neural networks,<ref>Kanter, I., Kessler, D., Priel, A. & Eisenstein, E. Analytical study of time series generation by feed-forward networks. Physical review letters 75, 2614 (1995).</ref> and then to the simplex method.<ref>Keren, S., Kfir, H. & Kanter, I. Possible sets of autocorrelations and the simplex algorithm. Journal of Physics A: Mathematical and General 39, 4161 (2006).</ref> These inverse problems opened a new research horizon in statistical mechanics, followed by numerous physical-system analyses and interdisciplinary directions.


'''5. Shannon Meets Carnot: Generalized Second Thermodynamic Law'''
[[File:Shannon meets Carnot.jpg|thumb|Upper: A temperature-dependent spring constant at High/Cold temperatures. Lower: Carnot cycle (black), information heat-engine (red).]]
The laws of thermodynamics describe the transport of heat and work in macroscopic processes and play a fundamental role in the physical sciences. In particular, the second law of thermodynamics linearly relates the change in entropy, <math>dS</math>, to the amount of heat, <math>dQ</math>, absorbed by a system at equilibrium, <math>dQ = T\,dS</math>, thus defining the temperature, <math>T</math>, of the system.
The prior scientific belief was that information theory is primarily a mathematical construct with a vitality of its own, independent of the physical laws of nature. Kanter and his Ph.D. student Ori Shental pioneered the bridging of Shannon theory and the second law of thermodynamics<ref>Shental, O. & Kanter, I. Shannon meets Carnot: Generalized second thermodynamic law. Europhysics Letters 85, 10006 (2009).</ref> and the fusing of statistical physics with information and communication theories.<ref>Peleg, Y., Efraim, H., Shental, O. & Kanter, I. Mutual information via thermodynamics: three different approaches. Journal of Statistical Mechanics: Theory and Experiment 2010, P01014 (2010).</ref>
In this line of work, Shental and Kanter generalized the second law of thermodynamics to encompass systems with temperature-dependent Hamiltonians, obtaining the generalized law <math>dQ = T\,dS + \left\langle \frac{dE}{dT} \right\rangle dT</math>, where <math>\langle \cdot \rangle</math> denotes averaging over the Boltzmann distribution. This generalized second law reveals a new definition of the basic notion of temperature and suggests an information heat engine.<ref>Peleg, Y., Efraim, H., Shental, O. & Kanter, I. Mutual information via thermodynamics: three different approaches. Journal of Statistical Mechanics: Theory and Experiment 2010, P01014 (2010).</ref> Interestingly, it provides a quantitative bridge between the realms of thermodynamics and information theory in the context of communication channels, such as the popular Gaussian and binary symmetric channels. Purely information-theoretic measures such as entropy, mutual information, and channel capacity are thereby correctly re-derived from thermodynamics.
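The origin of the extra term can be sketched with standard equilibrium statistical mechanics (taking <math>k_B = 1</math> and assuming no external mechanical work, so the heat absorbed equals <math>dU</math>):

```latex
% Boltzmann distribution with a temperature-dependent energy E_i(T):
p_i = \frac{e^{-E_i(T)/T}}{Z(T)}, \qquad
U = \langle E \rangle = \sum_i p_i\, E_i(T).

% Varying T changes U both through the weights p_i and through E_i itself:
dU = \sum_i E_i \, dp_i + \sum_i p_i \, dE_i
   = T\,dS + \left\langle \frac{dE}{dT} \right\rangle dT,

% since \sum_i E_i\,dp_i = T\,dS follows from S = -\sum_i p_i \ln p_i
% together with \sum_i dp_i = 0. Hence the heat absorbed generalizes to
dQ = T\,dS + \left\langle \frac{dE}{dT} \right\rangle dT.
```

For a temperature-independent Hamiltonian the averaged term vanishes and the ordinary second law <math>dQ = T\,dS</math> is recovered.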


'''6. Zero-lag Synchronization and the Greatest Common Divisor of Network Loops'''
[[File:Zero-lag Synchronization and the Greatest Common Divisor of Network Loops.png|thumb|Nodes with the same color are in zero-lag synchronization, GCD=1/5 (top/bottom)]]
The emergence of zero-lag synchronization among distant excitable or chaotic units, without a common input, remained a puzzle for many years. Between 2008 and 2010, a new non-local mechanism for the zero-lag synchronization of a network composed of chaotic units with time-delayed couplings was presented by Kanter and Professor Wolfgang Kinzel. The non-local mechanism is the greatest common divisor (GCD) of the network's loops:<ref>Kanter, I., Kopelowitz, E., Vardi, R., Zigzag, M., Kinzel, W., Abeles, M. & Cohen, D. Nonlocal mechanism for cluster synchronization in neural circuits. Europhysics Letters 93, 66001 (2011).</ref> for GCD=1, all units are in zero-lag synchronization; for GCD>1, the network splits into GCD clusters in which the clustered units are in zero-lag synchronization. These results are supported by simulations of chaotic networks, mixing arguments, and analytical solutions.
The zero-lag synchronization of distant systems and the non-local GCD mechanism were experimentally tested on reverberating activity patterns embedded in networks of cortical neurons,<ref>Vardi, R., Wallach, A., Kopelowitz, E., Abeles, M., Marom, S. & Kanter, I. Synthetic reverberating activity patterns embedded in networks of cortical neurons. Europhysics Letters 97, 66002 (2012).</ref><ref>Vardi, R., Timor, R., Marom, S., Abeles, M. & Kanter, I. Synchronization with mismatched synaptic delays: A unique role of elastic neuronal latency. Europhysics Letters 100, 48003 (2012).</ref> in controlling synchronization in large laser networks,<ref>Nixon, M., Fridman, M., Ronen, E., Friesem, A. A., Davidson, N. & Kanter, I. Controlling synchronization in large laser networks. Physical review letters 108, 214101 (2012).</ref><ref>Nixon, M., Friedman, M., Ronen, E., Friesem, A. A., Davidson, N. & Kanter, I. Synchronized cluster formation in coupled laser networks. Physical review letters 106, 223901 (2011).</ref> in small networks of time-delay-coupled chaotic diode lasers,<ref>Aviad, Y., Reidler, I., Zigzag, M., Rosenbluh, M. & Kanter, I. Synchronization in small networks of time-delay coupled chaotic diode lasers. Opt Express 20, 4352-4359 (2012).</ref> and in chaos synchronization in networks of semiconductor superlattices.<ref>Li, W., Aviad, Y., Reidler, I., Song, H., Huang, Y., Biermann, K., Rosenbluh, M., Zhang, Y., Grahn, H. T. & Kanter, I. Chaos synchronization in networks of semiconductor superlattices. Europhysics Letters 112, 30007 (2015).</ref>
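The GCD rule itself is easy to state computationally. A minimal sketch, with made-up loop lengths for illustration:

```python
from math import gcd
from functools import reduce

def num_sync_clusters(loop_lengths):
    """Predicted number of zero-lag synchronized clusters for a
    delay-coupled network: the GCD of its directed-loop lengths
    (an illustration of the rule described above)."""
    return reduce(gcd, loop_lengths)

print(num_sync_clusters([4, 6]))   # GCD = 2 -> two zero-lag clusters
print(num_sync_clusters([3, 5]))   # GCD = 1 -> full zero-lag synchrony
```

Adding a single loop whose length is coprime to the others drives the GCD to 1, which is why sparse extra connections can merge all clusters into global zero-lag synchronization.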


'''7. Neural Cryptography'''
[[File:Neural Cryptography.jpg|thumb|Synchronized weights (key) between two parity machines by mutual communication. ]]
A bridge between the synchronization of two mutually coupled feedforward networks and the generation of a symmetric key-exchange protocol over a public channel was established by Kanter and Professor Wolfgang Kinzel in 2001.<ref>Kinzel, W. & Kanter, I. in Advances in solid state physics 383-391 (Springer, 2002).</ref> A passive attacker who knows the protocol and all details of every transmission will find it difficult to reveal the mutual key. Simulations and analytical work indicate that mutual synchronization is faster than tracking by a passive attacker.
This type of key-exchange protocol was extended to the synchronization process of two mutually delayed coupled deterministic chaotic maps<ref>Kanter, I., Kopelowitz, E. & Kinzel, W. Public channel cryptography: chaos synchronization and Hilbert’s tenth problem. Phys Rev Lett 101, 084102 (2008).</ref><ref>Mislovaty, R., Klein, E., Kanter, I. & Kinzel, W. Public channel cryptography by synchronization of neural networks and chaotic maps. Physical review letters 91, 118701 (2003).</ref> and was examined experimentally using two mutual chaotic semiconductor lasers<ref>Klein, E., Gross, N., Rosenbluh, M., Kinzel, W., Khaykovich, L. & Kanter, I. Stable isochronal synchronization of mutually coupled chaotic lasers. Phys Rev E 73, 066214 (2006).</ref><ref>Kanter, I., Gross, N., Klein, E., Kopelowitz, E., Yoskovits, P., Khaykovich, L., Kinzel, W. & Rosenbluh, M. Synchronization of mutually coupled chaotic lasers in the presence of a shutter. Physical review letters 98, 154101 (2007).</ref><ref>Kanter, I., Butkovski, M., Peleg, Y., Zigzag, M., Aviad, Y., Reidler, I., Rosenbluh, M. & Kinzel, W. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography. Opt Express 18, 18292-18302 (2010).</ref>.
The task of a passive attacker is mapped in some limits onto Hilbert’s tenth problem<ref>Kanter, I., Kopelowitz, E. & Kinzel, W. Public channel cryptography: chaos synchronization and Hilbert’s tenth problem. Phys Rev Lett 101, 084102 (2008).</ref>, solving a set of nonlinear Diophantine equations, which was proven to be in the class of NP-complete problems. This bridge between nonlinear dynamics and NP-complete problems opens a horizon for new types of secure public-channel protocols.
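The mutual-learning protocol above synchronizes two tree parity machines: each party publishes only its output bit, and both apply a Hebbian update when the outputs agree, until the bounded weights coincide and form the shared key. The following is a minimal sketch of that idea; the parameter values (K hidden units, N inputs per unit, weight bound L) are illustrative, not recommended ones.

```python
import random

K, N, L = 3, 10, 3   # hidden units, inputs per unit, weight bound (illustrative)

def output(weights, x):
    """TPM output: the product of the signs of the K hidden-unit fields."""
    sigma = [1 if sum(w * xi for w, xi in zip(weights[k], x[k])) > 0 else -1
             for k in range(K)]
    tau = 1
    for s in sigma:
        tau *= s
    return sigma, tau

def update(weights, x, sigma, tau):
    """Hebbian update applied only to hidden units agreeing with tau;
    weights stay clipped to the interval [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            weights[k] = [max(-L, min(L, w + tau * xi))
                          for w, xi in zip(weights[k], x[k])]

rng = random.Random(0)
A = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
B = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

for _ in range(20000):
    if A == B:
        break                      # identical weights now form the shared key
    x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    sA, tA = output(A, x)
    sB, tB = output(B, x)
    if tA == tB:                   # update only when public outputs agree
        update(A, x, sA, tA)
        update(B, x, sB, tB)

print(A == B)
```

Both parties move toward each other on every agreeing step, whereas a passive attacker can only imitate one side, which is the qualitative reason mutual synchronization outpaces tracking.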


'''8. Physics Assists with Key Challenges in Artificial Intelligence'''
[[File:Physics assists.jpg|thumb]]
From 2020 to 2023, Kanter's research shed light on the following fundamental theoretical questions regarding deep learning (DL):

(a) Is shallow learning equivalent to DL? Is DL a necessary ingredient for artificial intelligence (AI)?<ref>Meir, Y., Tevet, O., Tzach, Y., Hodassman, S., Gross, R. D. & Kanter, I. Efficient shallow learning as an alternative to deep learning. Scientific Reports 13, 5423 (2023).</ref>

(b) What is the best location of the pooling operators to enhance accuracy?<ref>Meir, Y., Tzach, Y., Gross, R. D., Tevet, O., Vardi, R. & Kanter, I. Enhancing the success rates by performing pooling decisions adjacent to the output layer. arXiv preprint arXiv:2303.05800 (2023).</ref>

(c) Is brain learning, based on tree architectures only, weaker than AI?<ref>Meir, Y., Ben-Noam, I., Tzach, Y., Hodassman, S. & Kanter, I. Learning on tree architectures outperforms a convolutional feedforward network. Scientific Reports 13, 962 (2023).</ref>

(d) Is there a universal law regarding how to efficiently build DL architectures?<ref>Meir, Y., Sardi, S., Hodassman, S., Kisos, K., Ben-Noam, I., Goldental, A. & Kanter, I. Power-law scaling to assist with key challenges in artificial intelligence. Scientific reports 10, 19628 (2020).</ref>

(e) Can error rates follow a universal law as a function of dataset sizes?<ref>Meir, Y., Ben-Noam, I., Tzach, Y., Hodassman, S. & Kanter, I. Learning on tree architectures outperforms a convolutional feedforward network. Scientific Reports 13, 962 (2023).</ref>

(f) What is the mechanism underlying DL?<ref>Tzach, Y., Meir, Y., Tevet, O., Gross, R. D., Hodassman, S., Vardi, R. & Kanter, I. The mechanism underlying successful deep learning. arXiv preprint arXiv:2305.18078 (2023).</ref>

==Selected publications==
* {{cite journal | last=Gross | first=D. J. | last2=Kanter | first2=I. | last3=Sompolinsky | first3=H. | title=Mean-field theory of the Potts glass | journal=Physical Review Letters | volume=55 | issue=3 | date=1985 | doi=10.1103/PhysRevLett.55.304 | pages=304–307}}
* {{cite journal | last1=Sompolinsky | first1=H. | last2=Kanter | first2=I. | title=Temporal Association in Asymmetric Neural Networks | journal=Physical Review Letters | volume=57 | issue=22 | date=1986| doi=10.1103/PhysRevLett.57.2861 | pages=2861–2864| pmid=10033885 | bibcode=1986PhRvL..57.2861S }}
* {{cite journal | last1=Kanter | first1=I. | last2=Sompolinsky | first2=H. | title=Associative recall of memory without errors | journal=Physical Review A | volume=35 | issue=1 | date=1987| doi=10.1103/PhysRevA.35.380 | pages=380–392| pmid=9897963 | bibcode=1987PhRvA..35..380K }}
* {{cite journal | last=Kanter | first=Ido | title=Potts-glass models of neural networks | journal=Physical Review A | volume=37 | issue=7 | date=1988 | doi=10.1103/PhysRevA.37.2739 | pages=2739–2742}}
* {{cite journal | last=Kanter | first=I | last2=Kinzel | first2=W | last3=Kanter | first3=E | title=Secure exchange of information by synchronization of neural networks | journal=Europhysics Letters (EPL) | volume=57 | issue=1 | date=2002 | doi=10.1209/epl/i2002-00552-9 | pages=141–147| arxiv=cond-mat/0202112 }}
* {{cite journal | last1=Reidler | first1=I. | last2=Aviad | first2=Y. | last3=Rosenbluh | first3=M. | last4=Kanter | first4=I. | title=Ultrahigh-Speed Random Number Generation Based on a Chaotic Semiconductor Laser | journal=Physical Review Letters | volume=103 | issue=2 | date=2009 | page=024102 | doi=10.1103/PhysRevLett.103.024102| pmid=19659208 | bibcode=2009PhRvL.103b4102R }}
* {{cite journal | last1=Kanter | first1=Ido | last2=Aviad | first2=Yaara | last3=Reidler | first3=Igor | last4=Cohen | first4=Elad | last5=Rosenbluh | first5=Michael | title=An optical ultrafast random bit generator | journal=Nature Photonics | volume=4 | issue=1 | date=2010 | doi=10.1038/nphoton.2009.235 | pages=58–61| bibcode=2010NaPho...4...58K }}

==References==
{{Reflist|30em}}

==External links==
* [https://kanterlabsite.wixsite.com/idokanter Ido Kanter Lab website]
* [https://physics.biu.ac.il/en/node/578 Faculty page at the Bar-Ilan University Department of Physics]
* [https://kanterlabsite.wixsite.com/idokanter/press-articles Selected press releases]
* [https://vimeo.com/702894966 Video: "Dendritic learning as an alternative to synaptic plasticity"]
* [https://vimeo.com/752532666 Video: "Unreliable neurons improve brain functionalities and cryptography"]
* [https://scholar.google.com/citations?user=0MdAUb0AAAAJ&hl=en Ido Kanter's Google Scholar profile]
* [https://www.researchgate.net/profile/Ido-Kanter Ido Kanter's ResearchGate profile]
* [https://www.linkedin.com/in/ido-kanter-8448a016/recent-activity/all/ Ido Kanter's LinkedIn profile]

{{DEFAULTSORT:Kanter, Ido}}
[[Category:1959 births]]
[[Category:Living people]]
