Overview of Projects
Principal investigator(s): Sophie Denève, Nabil Bouaouli and Timm Lochmann
We seek to build new models of neural processing that operate in parallel on two levels: an explicit neural space, consisting of spikes, synapses, neurons, and neural circuits, and an implicit probability space, in which the same circuits implement Bayesian learning and inference in an underlying statistical model.
As a starting point, we propose to view cortical neurons as accumulating evidence over time about a particular hypothesis regarding the state of the environment, the body, or the task. This hypothesis could be the presence or absence of a preferred stimulus in the neuron's receptive field, the appropriateness of a particular movement, or a more abstract component in a combinatorial code: a "hidden variable" describing an underlying statistical structure in the sensory data. However, rather than directly computing an answer to their question (is the hypothesis true or false?), neurons compute and communicate to other neurons their certainty that the hypothesis is true.
Because the states of these variables are likely to change over time, and the main difficulty is to detect these changes correctly, spikes, and more generally neural activity, signal the occurrence of new probabilistic evidence that could not be predicted from previous spikes fired by the same neuron or by other neurons in the same population. In support of this hypothesis, neural responses to salient and unpredictable events, or to prediction errors, tend to be much stronger than responses to static, unsurprising stimuli.
In particular, our current research suggests that a neuron involved in such probabilistic computations would have the dynamics of a leaky integrate-and-fire neuron, with adaptation (spike-based, synaptic) and plasticity (spike-time dependent) close to those observed in cortical neurons.
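The evidence-accumulation view above can be illustrated with a short simulation: a neuron integrates the log-odds of its hypothesis and spikes only when that log-odds exceeds what its own previous spikes already predict, so spikes carry only new, unpredicted evidence. This is a simplified sketch in the spirit of the Bayesian spiking neuron (Denève 2008); all parameter values and the exact update rule below are illustrative, not the published model.

```python
import numpy as np

def bayesian_neuron(input_spikes, dt=1e-3, r_on=50.0, r_off=10.0,
                    e_on=1.0, e_off=1.0, g=1.0):
    """Sketch of a Bayesian spiking neuron (after Deneve, 2008).

    L integrates the log-odds that a hidden binary variable is 'on',
    driven by input spikes (each worth w) and a constant drift -theta
    when no spike arrives; e_on/e_off are the hidden state's assumed
    switching rates. The neuron fires only when L exceeds the running
    prediction G by g, i.e. output spikes signal unpredicted evidence.
    """
    w = np.log(r_on / r_off)          # evidence carried by one input spike
    theta = dt * (r_on - r_off)       # evidence carried by silence
    L, G = 0.0, 0.0
    out = np.zeros(len(input_spikes), dtype=int)
    for t, s in enumerate(input_spikes):
        # prior dynamics: possible state switches pull the log-odds to 0
        L += dt * (e_on * (1 + np.exp(-L)) - e_off * (1 + np.exp(L)))
        G += dt * (e_on * (1 + np.exp(-G)) - e_off * (1 + np.exp(G)))
        L += w * s - theta            # incoming sensory evidence
        if L - G > g:                 # unpredicted evidence: spike and
            out[t] = 1                # update the neuron's own prediction
            G += g
    return out
```

Driving the model with a dense input train (hypothesis likely "on") makes it fire, whereas with no input it stays silent, since silence is itself fully predictable.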
Denève, S., Bayesian Spiking Neurons I: Inference,
Neural Computation, 20, 91-117 (2008).
Denève, S., Bayesian Spiking Neurons II: Learning,
Neural Computation, 20, 118-145 (2008).
Mongillo, G. and Denève, S., Online Learning with Hidden Markov Models,
Neural Computation, 20(7), 1706-16 (2008).
Lochmann, T. and Denève, S., Information transmission with spiking Bayesian neurons,
New Journal of Physics, 10, article ID: 055019 (2008).
Principal investigator(s): Boris Gutkin, Andrew M. Oster, (Michael Graupner)
Addiction presents a complex behavioral process whose causes can be postulated on a multiplicity of levels, from molecular and pharmacological to cognitive. Computational approaches to addiction should bridge the neural with the behavioural/cognitive phenomena. Our approach is multi-level: we strive to synthesize the effects of the drug at the receptor, neural circuit, and decision-making levels in order to disentangle the roles of the primary rewarding and hedonic processes of the drug and the opponent processes in the progression from use to addiction.
We focus on nicotine addiction, with the overall hypothesis being that self-administration of nicotine results from abnormally biased learning at the level of the reward and action-selection neural circuits. This initial self-administration, when prolonged, leads to persistent addiction, possibly lasting the lifetime of the animal or person. The model is developed to reflect the neuroadaptations induced by nicotine at the circuit and receptor level, and to marry those with computational models of reinforcement learning.
In the initial stage of the project we have proposed a minimal model of nicotine addiction under a simple choice paradigm. We cast the model in an actor-critic framework, proposing specific anatomical substrates for both components and suggesting a differential drug action on each. We take into account the changes in neuronal activity induced directly by the drug, in both the short and the long term. Such neuroadaptations then lead to pathological action learning, resulting in addictive behavior. The minimal model accounts for a number of experimental findings.
This minimal model forms an initial framework for the development of a more comprehensive computational theory of drug addiction. We plan to extend the model to address more complex behavioral situations and to tease apart how the various learning processes (e.g. learning of rewards vs. learning of actions) are affected by the drug, leading to habitual drug taking. We also plan to consider the role of the prefrontal cortex in addiction, in particular its executive cognitive control over the reward- and action-selection circuits.
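The flavor of such a minimal model can be conveyed with a toy actor-critic simulation in which the drug-paired action adds a non-compensable bias to the dopamine-like teaching signal, in the spirit of Gutkin et al. (2006) and related models; the specific numbers and the rectified TD rule below are illustrative assumptions, not the published model.

```python
import numpy as np

def actor_critic(n_trials=2000, alpha=0.1, drug_bias=0.5, seed=0):
    """Toy choice between a 'natural' action (0) and a 'drug' action (1).

    Both actions deliver the same nominal reward, but the drug action
    adds a pharmacological bias to the TD error that learning cannot
    cancel, so the actor's preference for the drug grows without bound.
    """
    rng = np.random.default_rng(seed)
    pref = np.zeros(2)                          # actor: action preferences
    value = np.zeros(2)                         # critic: learned values
    reward = np.array([1.0, 1.0])
    for _ in range(n_trials):
        p = np.exp(pref) / np.exp(pref).sum()   # softmax policy
        a = rng.choice(2, p=p)
        delta = max(reward[a] - value[a], 0.0)  # dopamine-like TD error
        if a == 1:
            delta += drug_bias                  # drug-evoked dopamine surge
        value[a] += alpha * delta
        pref[a] += alpha * delta
    return pref, value
```

With `drug_bias=0` both actions end up valued at their true reward and remain comparably preferred; with a positive bias the drug action comes to dominate the policy even though its nominal reward is identical.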
Keramati, M. and Gutkin, B.S., Imbalanced decision hierarchy in addicts emerging from drug-hijacked dopamine spiraling circuit,
PLOS One, 8:4, 1-8 (2013).
Tolu, S., Eddine, R., Marti, F., David, V., Graupner, M., Baudonnat, S.P.M., Besson, M., Reperant, C., Zemdegs, J., Pagès, C., Caboche, J., Gutkin, B., Gardier, A.M., Changeux, J., Faure, P., and Maskos, U., Co-activation of VTA DA and GABA neurons mediates nicotine reinforcement,
Molecular Psychiatry, in press, (2012).
Zhang, D., Gao, M., Xu, D., Shi, W., Gutkin, B., Steffensen, S., Lukas, R., and Wu, J., Impact of prefrontal cortex in nicotine-induced excitation of VTA dopamine neurons in anesthetized rats,
Journal of Neuroscience, in press, (2012).
Keramati, M., Dezfouli, A., and Piray, P., Understanding Addiction as a Pathological State of Multiple Decision Making Processes: A Neurocomputational Perspective,
in: Computational Neuroscience of Drug Addiction, eds. Gutkin, B. and Ahmed, S., (2011).
Keramati, M. and Gutkin, B.S., A Reinforcement Learning Theory for Homeostatic Regulation,
Oster, A. and Gutkin, B.S., A reduced model of DA neuronal dynamics that displays quiescence, tonic firing and bursting,
J Physiol (Paris), in press, (2011).
Gutkin, B.S. and Ahmed, S.H., Computational Neuroscience of Drug Addiction,
in: Springer Series in Computational Neuroscience, Springer Verlag, DOI: 10.1007/978-1-4614-0751-5, (2011).
Graupner, M. and Gutkin, B.S., Modelling Local Circuit Mechanisms for Nicotine Control of Dopamine Activity,
in: Computational Neuroscience of Drug Addiction, eds. Gutkin, B.S. and Ahmed, S.H., Computational Neuroscience Series, Springer Verlag, 10, 111-144 (2011).
Piray, P., Keramati, M., Dezfouli, A., Lucas, C., and Mokri, A., Individual Differences in Nucleus Accumbens Dopamine Receptors Predict Development of Addiction-like Behavior: A Computational Approach,
Neural Computation, 22, 2334-2368 (2010).
Graupner, M. and Gutkin, B., Modeling nicotinic neuromodulation from global functional and network levels to nAChR based mechanisms,
Acta Pharmacol Sin, 30(6), 681–6 (2009).
Ahmed, S.H., Graupner, M., and Gutkin, B., Computational Approaches to the Neurobiology of Addiction,
Pharmacopsychiatry, 42(Suppl. 1), S144-S152 (2009).
Dezfouli, A., Piray, P., Keramati, M., Ekhtiari, H., Lucas, C., and Mokri, A., A Neurocomputational Model for Cocaine Addiction,
Neural Computation, 21, 2869-2893 (2009).
Ahmed, S., Bobashev, G., and Gutkin, B.S., The simulation of addiction: pharmacological and neurocomputational models of drug self-administration,
Drug Alcohol Depend, 90(2-3), 304-11 (2007).
Bobashev, G., Costenbader, E., and Gutkin, B.S., Comprehensive mathematical modeling in drug addiction sciences,
Drug Alcohol Depend, 89(1), 102-6 (2007).
Gutkin, B.S., Dehaene, S., and Changeux, J.P., A neurocomputational hypothesis for nicotine addiction,
Proc. Natl. Acad. Sci., 103 (4), 1106-1111 (2006).
Principal investigator(s): Sophie Deneve and Renaud Jardri
Recent advances in theoretical neuroscience have provided new insights into information processing within large brain-like networks operating in an uncertain world. The computational framework can overcome some of the complexity of the object of study by predicting how basic changes in neural architecture may lead to systems-level changes that translate into changes in behavior. Computational models offer ways to unify basic neurochemical findings with data from more macroscopic levels, and to begin applying these findings to cognitive science and psychiatry. We are currently developing a theory of how impaired inhibition in hierarchical neural networks could cause false perceptions. In collaboration with R. Jardri at CHU Lille, we test the predictions of this model in psychophysics tasks with schizophrenic patients and controls.
Jardri, R. and Deneve, S., Computational models of hallucinations,
The Neuroscience of Hallucinations, (2012).
Principal investigator(s): Sophie Deneve, Christian Machens, David Barrett and Ralph Bourdoukan
Neural networks compute with dynamic sensory and motor variables in a continually changing world. Here we show that networks of integrate-and-fire neurons can implement arbitrary linear dynamical systems by encoding "prediction errors" with their spikes. Our network model is derived from purely functional principles, yet naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests that spike times do matter when considering how the brain computes, and that the reliability of cortical representations may have been strongly underestimated.
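A minimal sketch of this idea, for a one-dimensional leaky integrator tracked by two neurons with opposite-sign decoding weights: each neuron fires only when its spike would reduce the readout's prediction error. This is a simplified, noise-free reading of the framework (in the spirit of Boerlin, Machens and Denève); the greedy one-spike-per-step rule and all parameters are illustrative.

```python
import numpy as np

def predictive_spiking_net(c, A, Gamma, dt=1e-3):
    """Spiking network tracking dx/dt = A x + c(t) via prediction errors.

    Gamma: (dim, N) decoding weights. Neuron i fires only when adding
    its kernel Gamma[:, i] to the readout xhat reduces ||x - xhat||^2,
    i.e. when its 'membrane potential' Gamma_i . (x - xhat) exceeds
    the threshold ||Gamma_i||^2 / 2.
    """
    T = 0.5 * np.sum(Gamma * Gamma, axis=0)   # per-neuron thresholds
    x = np.zeros(Gamma.shape[0])              # true dynamical state
    xhat = np.zeros_like(x)                   # spike-based readout
    n_spikes = 0
    for t in range(len(c)):
        x = x + dt * (A @ x + c[t])           # true system evolves
        xhat = xhat + dt * (A @ xhat)         # readout obeys same dynamics
        V = Gamma.T @ (x - xhat)              # membrane potentials
        i = int(np.argmax(V - T))
        if V[i] > T[i]:                       # greedy error-reducing spike
            xhat = xhat + Gamma[:, i]
            n_spikes += 1
    return x, xhat, n_spikes
```

Driving a leaky system (A = -10, constant input 10) toward its fixed point at 1, the spike-based readout stays within roughly one kernel height of the true trajectory.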
Principal investigator(s): Sophie Deneve
To make fast and accurate behavioural choices, we need to integrate noisy sensory input, take into account prior knowledge, and adjust our decision criteria. It was shown previously that in two-alternative forced-choice tasks, optimal decision making can be formalized in the framework of a sequential probability ratio test, and is then equivalent to a diffusion model. However, this analogy hides a “chicken and egg” problem: to know how quickly we should integrate the sensory input and where to set the optimal decision threshold, the reliability of the sensory observations has to be known in advance. Most of the time, we cannot know this reliability without first observing the decision outcome. We consider here a Bayesian decision model that simultaneously infers the probability of two different choice alternatives and estimates the reliability of the sensory information on which the choice is based. We show that this can be done within a single trial, based on the noisy responses of sensory spiking neurons. The resulting model is a non-linear diffusion-to-bound in which the weight of the sensory inputs and the decision threshold both change dynamically over time. In difficult decision trials, sensory inputs early in the trial have a stronger impact on the decision, and the threshold collapses, such that choices are made faster but with lower accuracy. The reverse is true in easy trials: the sensory weight and the threshold increase over time, leading to slower decisions but much higher accuracy. In contrast to the standard diffusion model, the adaptive sensory weights construct an accurate representation of the probability of each choice. This information can thus be appropriately combined with other unreliable cues, such as priors. We show that this model can account for recent findings in a motion discrimination task, and can be implemented in a neural architecture through fast Hebbian learning.
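The baseline SPRT/diffusion equivalence, and the "chicken and egg" problem it hides, can be seen in a short simulation: the log-likelihood-ratio increment below explicitly requires the drift `mu` (the sensory reliability) to be known in advance, which is exactly the quantity the adaptive model above must estimate on-line. A textbook sketch with illustrative values, not the adaptive model itself:

```python
import numpy as np

def sprt_trial(mu, sigma=1.0, bound=3.0, max_steps=10000, rng=None):
    """Sequential probability ratio test on Gaussian samples.

    Accumulates the log-likelihood ratio for H+: mean=+mu vs H-: mean=-mu
    until it hits +/- bound; this is a discrete-time drift-diffusion
    model with drift proportional to mu. Note the increment requires mu
    (the reliability) to be known in advance.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    llr = 0.0
    for step in range(1, max_steps + 1):
        x = rng.normal(mu, sigma)                # noisy sensory sample
        llr += 2.0 * mu * x / sigma ** 2         # log-likelihood increment
        if abs(llr) >= bound:
            return (llr > 0), step               # choice, decision time
    return (llr > 0), max_steps
```

Running easy (large `mu`) and hard (small `mu`) trials reproduces the basic speed-accuracy pattern: with a fixed bound, easy trials finish much faster at comparable accuracy.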
Principal investigator(s): Sophie Denève, Pierre Baraduc, and Pierre Morel
The capacity of the nervous system to update a motor command by a comparison between the sensory feedback and the efferent copy is well established (Wolpert et al, 1995). This requires the existence of an internal model for the state of the body, predicted from the efferent motor commands, and corrected by the sensory feedback. The relative strength of the feedback and the efferent copy ideally depends on the sensory and motor noise, and can be computed using a Kalman filter.
In collaboration with Pierre Baraduc at the Institute of Cognitive Science in Lyon, France, we are currently testing the predictions of these models for the control of eye and arm movements, particularly whether the gain attributed to visual feedback during movements and sequences of movements is adjusted according to the motor noise. In particular, we seek new methods to measure subjects' uncertainty about their on-line estimate of effector location during the movement, as opposed to only at the end of the movement.
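A minimal scalar version of such a filter combines an efference-copy prediction with noisy visual feedback. This is the standard textbook construction, not the specific model fitted in the papers below; `q` and `r` are assumed process and sensory noise variances.

```python
import numpy as np

def kalman_step(x_est, p, u, y, q=0.01, r=0.25):
    """One scalar Kalman-filter step for effector (hand/eye) position.

    x_est, p : current state estimate and its variance
    u        : efference copy (intended displacement this step)
    y        : noisy sensory observation of the new position
    q, r     : process and sensory noise variances
    """
    x_pred = x_est + u                  # predict from the efference copy
    p_pred = p + q                      # prediction uncertainty grows
    k = p_pred / (p_pred + r)           # Kalman gain: weight of vision
    x_new = x_pred + k * (y - x_pred)   # correct with sensory feedback
    p_new = (1.0 - k) * p_pred
    return x_new, p_new, k
```

The gain `k`, and hence the weight given to vision, falls as the sensory noise `r` grows relative to the motor noise `q`; this relative weighting is the quantity probed experimentally. Tracking a noisy trajectory, the filtered estimate is reliably closer to the true position than the raw sensory observation.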
Morel, P., Deneve, S., and Baraduc, P., Optimal and suboptimal use of postsaccadic vision in sequences of saccades.,
Journal of Neuroscience, 31(27), 10039-49 (2011).
Munuera, J., Morel, P., Duhamel, J., and Denève, S., Optimal sensorimotor control in eye movement sequences,
Journal of Neuroscience, 29, 3026-35 (2009).
Principal investigator(s): Michiel Remme, Boris Gutkin, Mate Lengyel
A current project of mine concerns how synaptic input is integrated in active dendritic trees. The classic picture of summation of synaptic inputs in passive dendritic cables has been strongly challenged over the last decade. Many experimental studies have shown that dendrites are endowed with a wide variety of voltage-dependent ion channels. Particularly interesting is that inputs to the dendrites can locally generate spikes. These dendritic spikes can propagate to the soma and trigger a somatic action potential.
Obviously these properties greatly influence the integrative properties of the neuron. In a collaboration with Boris Gutkin and Mate Lengyel, I am currently studying how synaptic inputs to active dendrites interact and what kinds of computations can be performed.
Caze, R.D., Humphries, M., and Gutkin, B.S., Passive Dendrites Enable Single Neurons to Compute Linearly Non-separable Functions,
PLOS Computational Biology, 9(2): e1002867, (2013).
Caze, R., Humphries, M., and Gutkin, B.S., Spiking and saturating dendrites differentially expand single neuron computation capacity,
Advances in Neural Information Processing Systems, in press, (2012).
Jedlicka, P., Deller, T., Gutkin, B.S., and Backus, K.H., Activity-Dependent Intracellular Chloride Accumulation and Diffusion Controls GABAA Receptor-Mediated Synaptic Transmission,
Remme, M., Lengyel, M., and Gutkin, B.S., Democracy-Independence Trade-Off in Oscillating Dendrites and Its Implications for Grid Cells,
Neuron, 66, 429-437 (2010).
Remme, M., Lengyel, M., and Gutkin, B.S., The role of ongoing dendritic oscillations in single-neuron dynamics,
PLOS Comput. Biol., 5(9), e1000493 (2009).
Principal investigator(s): Pierre Morel and Sophie Denève
Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Bayesian filter. This filter can be used to perform a variety of tasks near-optimally, such as predicting the sensory consequences of motor actions, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Bayesian filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves for variables such as arm velocity are remarkably non-invariant, in the sense that the amplitude and width of a given neuron's tuning curve can vary greatly depending on other variables, such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and leads to several new predictions.
Principal investigator(s): Sophie Denève, Timm Lochmann, Matthew Chalk and Udo Ernst
A striking analogy exists between the Bayesian networks (graphical models) used in machine learning and recurrent neural networks, which also contain nodes (neurons or neural populations), links (axons and synapses), and multidirectional propagation of messages (spikes in the case of neural networks, beliefs in the case of Bayesian networks). In particular, we can draw a parallel between the factorization of the joint probability distribution over the nodes of a graphical model that represents the statistical structure of the perceptual and motor environment, and the modular structure of the brain that implements this graphical model.
Our major objective is to use neurons as building blocks in a new theory of cortical computation, in which neural networks implement an underlying, hierarchical statistical model, and in which the multidirectional flow of information within cortical networks is interpreted as a propagation of beliefs, allowing each neuron to compute the probability that its hypothesis is true given the evidence received by the entire brain. More generally, we propose to show that networks of biophysical spiking neurons approximate Bayesian inference through a local message-passing algorithm, belief propagation, in a corresponding Bayesian network.
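For concreteness, the message passing referred to above can be written down exactly for a chain of binary variables, where sum-product belief propagation recovers the true marginals. This is the textbook algorithm, not our neural implementation of it:

```python
import numpy as np

def bp_chain(unaries, pairwise):
    """Sum-product belief propagation on a chain of binary variables.

    unaries : (n, 2) array of local evidence for each variable
    pairwise: (2, 2) coupling potential shared by all edges
    Returns the marginal belief of each variable; on a chain (a tree),
    belief propagation is exact.
    """
    n = unaries.shape[0]
    fwd = [np.ones(2)]                       # messages passed rightward
    for i in range(n - 1):
        m = pairwise.T @ (unaries[i] * fwd[i])
        fwd.append(m / m.sum())
    bwd = [np.ones(2)]                       # messages passed leftward
    for i in range(n - 1, 0, -1):
        m = pairwise @ (unaries[i] * bwd[-1])
        bwd.append(m / m.sum())
    bwd = bwd[::-1]
    beliefs = np.array([unaries[i] * fwd[i] * bwd[i] for i in range(n)])
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

On a two-node chain the result can be checked by hand against the exact marginals obtained by summing the joint distribution.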
Lochmann, T. and Deneve, S., Neural processing as causal inference.,
Current Opinion in Neurobiology, 21(5), 774-81 (2011).
Lochmann, T. and Deneve, S., Optimal cue combination predicts contextual effects on sensory neural responses,
Sensory Cue Integration, (2011).
Principal investigator(s): Boris Gutkin, Martin Krupa (INRIA), Anatoly Buchin, Alexandre Hyafil, Lorenzo Fontolan (U Geneva)
For the analysis of single neurons we take the minimal modeling approach, which attempts to construct the simplest structural model that explains an observed phenomenon, using mechanisms based on known physiological processes. The goal of this approach is to create models that display non-trivial behavior, provide experimentally testable predictions, and are amenable to analysis. A central interest is developing mathematical methods that produce minimal models: in other words, models that are mathematically tractable and yet reflect the very heart of the neural mechanism underlying the neural or cognitive phenomenon. This slightly abstracted, mathematical approach can produce not only individual models but also a theory with a wider brush stroke, and hopefully allows one to unify previously disparate data.
Recently we have turned to modelling network activity during gamma oscillations, its coupling with the theta rhythm, and the development of epileptic seizures (in collaboration with R. Miles).
For example, the transition from evoked to repetitive firing in many conductance-based neural models is produced by a specific dynamical mechanism, the saddle-node bifurcation, seen when the associated phase-space diagrams are studied in the context of non-linear dynamical systems. This is known as "Type I" membrane excitability, as defined by A. Hodgkin in his seminal work from 1948. By using normal form reduction, a simple canonical one-equation model for this dynamical class can be derived and used to make qualitative statements generic to the whole class. Thus, by studying this canonical model we are studying the behavior of a wide class of excitable membranes, and thus of neurons.
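The canonical model in question is the Ermentrout-Kopell "theta neuron". A short simulation exhibits the Type I signature: no firing below the saddle-node bifurcation at I = 0, and above it a firing rate that grows as the square root of the input (here sqrt(I)/pi), so doubling sqrt(I) doubles the rate. Time step and durations are illustrative.

```python
import numpy as np

def theta_neuron_rate(I, dt=1e-3, t_max=50.0):
    """Ermentrout-Kopell theta neuron, the normal form of Type I
    excitability: dtheta/dt = (1 - cos theta) + (1 + cos theta) * I.
    A spike is a crossing of theta = pi; returns the firing rate.
    """
    theta, spikes = -np.pi / 2, 0
    for _ in range(int(t_max / dt)):
        theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * I)
        if theta > np.pi:          # spike: wrap the phase around
            theta -= 2 * np.pi
            spikes += 1
    return spikes / t_max
```

Below threshold (I < 0) the phase settles at a stable fixed point and the neuron is quiescent; quadrupling a supra-threshold input roughly doubles the rate, as expected from the square-root law.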
A neuron's spike-generating dynamics can be characterized by its phase response curve (PRC): a measure of how the timing of individual spikes is shifted by weak transient inputs (e.g. a single EPSP or IPSP). We are building upon the canonical theory of spike generation to predict the shapes of PRCs in cortical neurons and to link these with other standard measures of neural response.
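Such a measurement can be sketched numerically on the canonical Type I model: deliver a weak current kick at different phases of the firing cycle and record the resulting spike-time advance. For Type I excitability the PRC is non-negative (weak inputs can only advance the next spike) and peaks mid-cycle; the parameters below are illustrative.

```python
import numpy as np

def theta_prc(I=0.25, eps=0.01, n_phase=9, dt=2e-4):
    """Numerically measured phase response curve of the theta neuron.

    A brief current kick of size eps enters through the (1 + cos theta)
    term, as a synaptic input would; the PRC is the spike-time advance
    it causes, as a function of the phase at which it is delivered.
    """
    def time_to_spike(kick_time=None):
        theta, t, kicked = -np.pi + 1e-6, 0.0, kick_time is None
        while theta < np.pi:
            if not kicked and t >= kick_time:
                theta += eps * (1 + np.cos(theta))   # transient input
                kicked = True
            theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * I)
            t += dt
        return t

    T0 = time_to_spike()                       # unperturbed period
    phases = np.linspace(0.05, 0.95, n_phase)  # kick phases (fraction of T0)
    advance = np.array([T0 - time_to_spike(f * T0) for f in phases])
    return phases, advance / eps               # normalized PRC
```

The measured curve is (up to discretization error) non-negative everywhere and much larger at mid-cycle than just after the spike, the classic Type I shape.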
We have derived an extended reduced model that includes slow adaptation (spike-frequency adaptation). We have shown how such adaptation can shape the bifurcations underlying the generation of action potentials. Such adaptive processes, linked with a family of slow potassium currents, are under exquisite control of neuromodulators such as acetylcholine and dopamine. We are currently studying experimentally the effect of such modulators on the structure of the spike-generating dynamics in cortical neurons.
We have identified that neurons can respond to transient excitatory inputs either with high precision, independent of the spike emission probability, or with variable (arbitrary) delays. In the second case, the distribution of spike times depends on the spike emission probability. We are now identifying the mechanisms underlying this difference and studying its consequences for coding.
We are further exploring the consequence of spike generating dynamics and adaptation on the computation performed by the neurons in the context of Bayesian information processing.
Muller, L., Brette, R., and Gutkin, B.S., Spike-timing dependent plasticity and feed-forward input oscillations produce precise and invariant spike phase-locking,
Frontiers in Neuroscience, in press, (2011).
Stiefel, K.M., Gutkin, B.S., and Sejnowski, T.E., The effects of cholinergic neuromodulation on neuronal phase-response curves of modeled cortical neurons,
J Comput Neurosci, 29(2), 289-301 (2009).
Gutkin, B.S., Tuckwell, H., and Jost, J., Random perturbations of spiking activity in a pair of coupled neurons,
Theory in the Biosciences, (in press), (2008).
Stiefel, K.M., Gutkin, B.S., and Sejnowski, T.E., Cholinergic modulation of dynamics underlying spike generation in cortical neurons,
PLoS ONE, 3(12), e3947 (2008).
Jeong, H.Y. and Gutkin, B.S., Synchrony of neuronal oscillations controlled by GABAergic reversal potentials,
Neural Computation, 19 (3), 706-729 (2007).
Brumberg, J.C. and Gutkin, B.S., Cortical pyramidal cells as non-linear oscillators: Experiment and spike-generation theory,
Brain Research, 1171, 122-137 (2007).
Gutkin, B.S. and Ermentrout, G.B., Neuroscience: spikes too kinky in the cortex?,
Nature, 440 (7087), 999-1000 (2006).
Gutkin, B.S., Ermentrout, G.B., and Reyes, A.D., Phase-response curves give the responses of neurons to transient inputs.,
Journal of Neurophysiology, 94, 1623-1635 (2005).
Stiefel, K.M., Wespatat, V., Gutkin, B.S., Tennigkeit, F., and Singer, W., Phase dependent sign changes of GABAergic synaptic input explored in vitro and in computo,
Journal of Computational Neuroscience, 19 (1), 71-85 (2005).
Gutkin, B.S., Ermentrout, G.B., and Rudolph, M., Spike generating dynamics and the conditions for spike-time precision in cortical neurons,
Journal of Computational Neuroscience, 15, 91-103 (2003).
Ermentrout, G.B., Pascal, M., and Gutkin, B.S., The Effects of Spike Frequency Adaptation and Negative Feedback on the Synchronization of Neural Oscillators,
Neural Computation, 13, 1285-1310 (2001).
Gutkin, B.S. and Ermentrout, G.B., Dynamics of Membrane Excitability Determine Interspike Interval Variability: A Link between Spike Generation Mechanisms and Cortical Spike Train Statistics,
Neural Computation, 10 (5), 1047-1065 (1998).
Principal investigator(s): Boris Gutkin, Mario Dipoppa (UCL)
[Figure: Dopaminergic modulation of a working memory circuit. Upper: general model network diagram. Lower: transfer function of the BG units; DA induces bistability. From Gruber et al. 2006.]
We have studied the mechanisms of working memory formation. This is the kind of memory in which information is held actively on-line for use in generating cognition-guided action and behavior. Neurons in several cortical areas, and particularly in the dorsolateral prefrontal cortex of primates, show sustained activity that is a neural representation of the working memory trace. We are exploring the dynamical mechanisms underlying this activity, focusing on the structure of spiking in such circuits and the influence of neuromodulation.
The underlying theoretical models for this activity are the so-called 'bump' attractors: sustained foci of activity that are spatially and temporally stable. We have considered the relationship between the structure of spike timing within a bump attractor and its stability. We have shown an "inhibitory" effect of synchronizing excitation: sustained activity stops when the active neurons become synchronized. This effect is independent of the neuronal model used and also independent of recurrent inhibition. We have suggested that this could be an efficient way to update the representations held in working memory.
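The basic bump-attractor scenario can be reproduced in a few lines with a rate model: cosine-profile recurrent excitation plus uniform inhibition sustains a localized bump of activity after a transient tuned stimulus is switched off. This is a generic textbook sketch, not our spiking models; the weights, gain, and time constants are illustrative choices.

```python
import numpy as np

def ring_bump(j0=-2.0, j1=12.0, n=64, tau=10.0, dt=1.0,
              t_stim=100, t_total=500):
    """Rate-model ring network: a transient tuned stimulus at angle pi
    ignites a localized 'bump' of activity that persists after stimulus
    offset, supported purely by the recurrent connectivity."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # local excitation (cosine profile) plus uniform inhibition (j0 < 0)
    W = (j0 + j1 * np.cos(theta[:, None] - theta[None, :])) / n
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))  # rate function
    r = np.zeros(n)
    for t in range(t_total):
        stim = 2.0 * np.exp(-(theta - np.pi) ** 2) if t < t_stim else 0.0
        r += dt / tau * (-r + f(W @ r + stim))             # rate dynamics
    return theta, r
```

After the stimulus is removed, the bump remains centered on the stimulated angle and the rest of the ring stays silent; removing the tuned part of the connectivity (`j1=0`) makes the activity die out instead.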
We are studying the influence of noise on the stability and the onset of sustained activity in spiking neural circuits. We have shown that random excitatory synaptic inputs can disrupt sustained activity in a simple, purely excitatory circuit. We are presently analyzing the mechanism for this and studying the consequences for neural phenomena such as up-down states.
We have studied bump formation and stability in networks with patchy long-range lateral connections: excitatory "lattices". We have found that patchy connections stabilize the coexistence of multiple bumps, and hence the traces of multiple memories.
We have studied the influence of basal ganglia on the stability of bump attractors. We have specifically examined the role of dopamine in modulating the relative interactions between the prefrontal cortex and the basal ganglia, suggesting that dopaminergic modulation enables contextual control of memory store access.
We have recently studied the influence of coherent oscillations and background noise correlations on the stability of persistent activity. Our models show that both change the memory state from an attractor to a slow transient, with characteristic lifetimes that depend on the statistics of the background activity and the parameters of the oscillations. We showed that different oscillatory frequency bands, at equal coherence, have a differential effect on the ability of transient stimuli to activate the memory state, and of an existing memory state to be deactivated. Using this fact, we show how flexible shifts in oscillatory frequency content can provide the various dynamical and gating regimes needed to implement whole delayed-response tasks. Surprisingly, we found that theta-band oscillations ensure robust memory maintenance in the face of irrelevant distractors, while alpha-band oscillations clear the memory and ensure only transient responses to transient stimuli.
DiPoppa, M., Krupa, M., Torcini, A., and Gutkin, B., Marginally Stable States and Quasi-periodic minor attractors in excitable pulse-coupled networks,
SIAM Journal on Applied Dynamical Systems, 11, 864-894 (2012).
Gutkin, B.S., Tuckwell, H., and Jost, J., The Phenomenon of Inverse Stochastic Resonance,
Naturwissenschaften, DOI 10.1007/s00114-009-0570-5, (2009).
Gutkin, B.S., Tuckwell, H., and Jost, J., Transient termination of synaptically sustained firing by noise,
Europhysics Letters, 81, 20005 (2008).
Mongillo, G., Barak, O., and Tsodyks, M., Synaptic theory of working memory,
Science, 319, 1543-1546 (2008).
Gruber, A.J., Dayan, P., Gutkin, B.S., and Solla, S.A., Dopamine modulation in the basal ganglia locks the gate to working memory,
Journal of Computational Neuroscience, 20 (2), 153-166 (2006).
Gutkin, B.S., Jost, J., and Hely, T., Noise Delays Onset of Sustained Firing in a Positively Coupled Neural Circuit,
Neurocomputing, 58-60, 753-760 (2004).
Laing, C.R., Troy, W.C., Gutkin, B.S., and Ermentrout, G.B., Multiple Bumps in a Neuronal Model of Working Memory,
SIAM Journal of Applied Mathematics, 63 (1), 62-97 (2002).
Gutkin, B.S., Laing, C.R., Colby, C., Chow, C.C., and Ermentrout, G.B., Turning On and Off with Excitation,
J. Computational Neuroscience, 11:2, 121-134 (2001).