ESI Systems Neuroscience Conference 2019

The recurrent cortex: feedback, dynamics, and dimensionality

The Ernst Strüngmann Institute Systems Neuroscience Conference (ESI SyNC) 2019 will take place on September 5th and 6th at the Ernst Strüngmann Institute in Frankfurt, Germany. This year’s theme is The recurrent cortex: feedback, dynamics, and dimensionality. The conference will feature eleven internationally renowned speakers who will give talks on neural feedback, signal dimensionality, and temporal dynamics. Ample time for both formal and informal discussion has been included in the schedule. There will also be a poster session for which we welcome submissions.

Program

September 5th - Thursday
08:30 - 09:00     Registration and Coffee
09:00 - 09:15     Introduction
09:15 - 10:30     Li Zhaoping
The primary visual cortex as a center stage for vision: bottom-up visual selection and top-down visual recognition
10:30 - 11:00     Coffee Break
11:00 - 12:15     Carina Curto
Dynamically relevant motifs in inhibition-dominated networks
12:15 - 13:30     Peter Dayan
The cortical dynamics of integrative decision making
13:30 - 14:30     Lunch
14:30 - 15:45     Nikolaus Kriegeskorte
Recurrent computation in human visual object recognition
15:45 - 16:15     Coffee Break
16:15 - 17:30     Alessandra Angelucci  
Organization and function of inter-areal feedback connections in early visual processing
17:30 - 19:00     Thomas Fraps
Constructing illusions of impossibility - the role of temporal misdirection in magic performances
19:00 - 21:00     Poster Session and Dinner

September 6th - Friday
09:00 - 09:15     Coffee    
09:15 - 10:30     Tatyana Sharpee
Reading neural responses to avoid information loss
10:30 - 11:00     Coffee Break    
11:00 - 12:15     Jakob Macke    
Teaching neural networks to identify mechanistic models of neural dynamics
12:15 - 13:30     Sacha van Albada    
Large-scale spiking neural network modeling of primate cortical dynamics
13:30 - 14:30     Lunch    
14:30 - 15:45     Fred Wolf    
A major transition in cortical microcircuit structure at the origin of the primate brain
15:45 - 17:15     Closing Discussion    

Registration

Registration is now full! Please send an email to esi-sync (at) esi-frankfurt.de if you would like to be informed about next year’s ESI SyNC.

Please note that the fee for non-ESI and non-MPI Brain Research participants is 100 EUR.

Organizers

Wolf Singer
Pascal Fries
Renata Vajda
William Barnes
Christini Katsanevaki
Katharine Shapcott
Cem Uran
Yiling Yang

Poster Session

The poster board size is 1.8m wide x 1.2m tall. You are free to choose the size of your poster as long as it fits on the board.

  1. Dominik Aschauer: A basis set of elementary operations captures recombination of neocortical cell assemblies during basal conditions and learning
  2. Bastian Eppler: Abrupt transitions of response patterns emerging from gradual changes in connectivity
  3. Michael Bannert: Predictive deep learning explains human BOLD responses during natural viewing
  4. William Barnes: Neuronal dynamics related to figure-ground segregation in V1 of the macaque monkey
  5. Ana Clara Broggini: Optogenetic manipulation of top-down feedback in mouse visual cortex
  6. Anastasia Brovkin: Linking Cytoarchitecture and Connectivity in the Human Brain
  7. Benjamin Dann: Neural population activity does not reflect the network communication structure in monkey frontoparietal areas
  8. Oliver Grimm: Dopaminergic modulation of the reward system and subcortical nuclei in a human pharmaco-fMRI study
  9. Patrick Jendritza: Large-scale recordings and optogenetics in the awake behaving marmoset
  10. Tina Katsanevaki: Attentional modulation of V1 (inhibitory) circuits can mediate selective V1-V4 communication through coherence
  11. Liane Klein: Chronic implantation of a high-density silicon probe in primate visual cortex
  12. Björn Mattes: Magnocellular influence on dynamics in area V1
  13. Alina Peter: Surround modulation of V1 firing and gamma oscillations depends on stimulus chromaticity and predictability
  14. Alina Peter: Repetition of natural images decreases firing rates and can increase gamma synchronization in V1
  15. Irene Onorato: A distinct class of bursting neurons with strong gamma synchronization and stimulus selectivity in macaque V1
  16. Vahid Rostami: Spiking attractor networks with inhibitory clusters can explain context-dependent variability dynamics in monkey motor cortex
  17. Panagiotis Sapountzis: Encoding and Working Memory Maintenance of Feature and Spatial Information in the Parietal and Prefrontal Cortex
  18. Benjamin Stauch: Stimulus learning in human visual gamma-band activity
  19. Mats van Es: Are visual representations phasically activated?
  20. Cem Uran: Natural image statistics and contextual information determine monkey V1 activity
  21. Julien Vezoli: Network properties of frequency-dependent rhythmic synchronizations
  22. Yang Yiling: Transient and persistent: stimulus structure-dependent neural population dynamics in visual area V4

Abstracts

Large-scale spiking neural network modeling of primate cortical dynamics
Sacha van Albada, Group Leader of Theoretical Neuroanatomy, Institute of Neuroscience and Medicine / Computational and Systems Neuroscience, Forschungszentrum Jülich, Jülich.

Characteristic features of the resting-state dynamics of the cerebral cortex include cell-type-specific firing rates, slow fluctuations expressed in power spectra and in the waxing and waning of activity between groups of correlated areas, and feedback-dominated activity propagation across areas. It is to a large extent unknown how the multi-scale connectivity structure of cerebral cortex shapes this activity. Bringing together a wide range of knowledge on cortical structure into unified models enables simulations that shed light on the relationship between the network structure and the observed resting-state activity.
I present simulations of all vision-related areas in one hemisphere of macaque cortex [1] performed on supercomputers. Each area is represented by a 1 mm2 microcircuit [2], adjusted to the area-specific neuron densities and laminar thicknesses, leading to a total of about 4 million neurons connected via 24 billion synapses. Within each microcircuit, the full density of neurons and synapses is used, preventing distortions due to downscaling [3]. The inter-area connectivity is area-, layer-, and cell-type-specific, and combines axonal tracing data with predictive connectomics [4]. Using a mean-field method, the connectivity is altered slightly to yield plausible average firing rates [5]. Adjusting the cortico-cortical synaptic strengths, slow activity fluctuations emerge just below an instability. In this regime, the power spectrum of the simulated spiking activity of primary visual cortex (V1) and the distribution of spike rates across V1 neurons match parallel spike recordings from macaque. The same parameter settings maximize the correspondence of the functional connectivity with macaque resting-state fMRI. The simulated activity propagates across areas mainly in the feedback direction, akin to experimental findings during sleep and visual imagery.
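At a vastly reduced scale, the type of simulation described above can be sketched with a minimal leaky integrate-and-fire network containing an excitatory and an inhibitory population. All sizes and parameters below are illustrative placeholders, orders of magnitude smaller than the multi-area model, and not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal leaky integrate-and-fire E-I network (illustrative parameters only;
# the actual multi-area model has ~4 million neurons and area-, layer-, and
# cell-type-specific connectivity).
n_e, n_i = 400, 100            # excitatory / inhibitory neurons
n = n_e + n_i
p = 0.1                        # connection probability
tau, dt = 0.02, 0.001          # membrane time constant, time step (s)
v_th, v_reset = 1.0, 0.0       # threshold and reset (arbitrary units)

W = (rng.random((n, n)) < p).astype(float)
W[:, :n_e] *= 0.02             # excitatory weights
W[:, n_e:] *= -0.1             # stronger inhibitory weights (inhibition-dominated)

v = rng.random(n)              # initial membrane potentials
spike_count = 0
for _ in range(1000):          # 1 s of simulated time
    fired = v >= v_th
    v[fired] = v_reset
    drive = W @ fired + 1.2    # recurrent input + constant external drive
    v += dt * (-v + drive) / tau
    spike_count += fired.sum()

mean_rate = spike_count / (n * 1.0)   # mean firing rate in spikes/s
```

Even this toy version reproduces the qualitative point that population firing rates emerge from the interplay of recurrent excitation, inhibition, and external drive.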
Our model reconciles microscopic and macroscopic accounts of cortical structure and dynamics. Open-source publication of the code on GitHub along with a workflow based on Snakemake [6] enables others to access and use the model.
Our multi-area model has population-specific connection probabilities but the connectivity is otherwise random. Other models have considered clustering within excitatory populations. I finish by presenting recent work in which we show that joint excitatory-inhibitory clustering accounts better for cortical task-related activity and variability dynamics [7].
Acknowledgements: This work was supported by the European Union (BrainScaleS, grant 269921 and Human Brain Project, grants 604102 and 785907), the Jülich Aachen Research Alliance (JARA), the German Research Council (DFG grants SPP 2041, SFB936/A1,Z1 and TRR169/A2), and computing time grant JINB33.
References:
[1] Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ (2018) PLOS Comput Biol 14: e1006359.
[2] Potjans TC, Diesmann M (2014) Cereb Cortex 24: 785-806.
[3] van Albada SJ, Helias M, Diesmann M (2015) PLOS Comput Biol 11: e1004490.
[4] Schmidt M, Bakker R, Hilgetag CC, Diesmann M, van Albada SJ (2018) Brain Struct Funct 223: 1409-1435.
[5] Schuecker J, Schmidt M, van Albada SJ, Diesmann M, Helias M (2017) PLOS Comput Biol 13: e1005179.
[6] Köster J, Rahmann S (2012) Bioinformatics 28: 2520-2522.
[7] Rostami V, Rost T, van Albada SJ, Nawrot M (2018) Bernstein Conference abstract

Organization and function of inter-areal feedback connections in early visual processing
Alessandra Angelucci, Mary H. Boesche Professor of Ophthalmology and Visual Sciences, Moran Eye Institute, University of Utah, Utah.

In the primate visual cortex, information travels along feedforward connections through a hierarchy of areas. Neuronal receptive fields in higher areas become tuned to increasingly complex stimulus features, via convergent feedforward inputs from lower areas. In turn, anatomically prominent feedback connections send information from higher to lower areas. Feedback connections have been implicated in many functions important for vision, including attention, expectation, and visual context, yet their anatomy and function have remained unknown. This is partly due to the technical difficulty, in previous studies, of selectively labeling and manipulating the activity of feedback neurons. To overcome these technical limitations, we have used novel viral labeling and optogenetic approaches to investigate the anatomy and function of feedback connections between the secondary (V2) and primary (V1) visual areas of primates. Anatomically, we find evidence for multiple distinct feedback channels, and for direct, monosynaptic feedback-feedforward loops. Functionally, our results point to a fundamental role of feedback in early visual processing: controlling the spatial resolution of visual signals by modulating receptive field size, controlling the perceptual sensitivity to image features by modulating response gain, and contributing to contextual modulation in V1.

Dynamically relevant motifs in inhibition-dominated networks
Carina Curto, Department of Mathematics, The Pennsylvania State University, Pennsylvania.

Many networks in the nervous system possess an abundance of inhibition, which serves to shape and stabilize neural dynamics. The neurons in such networks exhibit intricate patterns of connectivity whose structure controls the allowed patterns of neural activity. In this work, we examine inhibitory threshold-linear networks whose dynamics are constrained by an underlying directed graph. We develop a set of parameter-independent graph rules that enable us to predict features of the dynamics, such as emergent sequences and dynamic attractors, from properties of the graph. These rules provide a direct link between the structure and function of these networks, and may provide new insights into how connectivity shapes dynamics in real neural circuits.
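The graph-to-dynamics link can be made concrete in a few lines. The sketch below uses one standard inhibition-dominated threshold-linear construction (an assumption for illustration; the abstract does not spell out a parameterization), in which a directed 3-cycle yields sequential, cyclic activity.

```python
import numpy as np

def simulate_tln(W, b, x0, n_steps=5000, dt=0.01):
    """Euler integration of the threshold-linear dynamics
    dx/dt = -x + [W x + b]_+, where [.]_+ denotes rectification."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for t in range(n_steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

# Directed 3-cycle (0 -> 1 -> 2 -> 0): edges get weight -1 + eps,
# non-edges get -1 - delta, so every interaction is inhibitory.
eps, delta = 0.25, 0.5
W = np.array([[0.0,        -1 - delta, -1 + eps],
              [-1 + eps,   0.0,        -1 - delta],
              [-1 - delta, -1 + eps,   0.0]])
traj = simulate_tln(W, b=np.ones(3), x0=[0.2, 0.0, 0.0])
# The trajectory settles into a limit cycle in which the three neurons
# peak one after another -- a dynamic attractor predictable from the graph.
```

The point of the graph rules is that the emergent sequence can be read off the directed 3-cycle without ever choosing the numerical weights.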

The cortical dynamics of integrative decision making
Peter Dayan, Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen.

Many decisions require complex information to be integrated, forcing the brain to process multiple relevant elements. One way to do this is sequential: dividing the decision process into multiple successive stages that can be executed separately. An important alternative, namely processing elements in parallel, often sacrifices accuracy for speed. We investigated this tradeoff using fine time-scale magnetoencephalographic analysis of cortical representations. We found three sources of individual differences in the temporal structure of the integration process: sequential representation, partial reinstatement of relevant information and early computation, each of which had a dissociable effect on how subjects handled problem complexity and temporal constraints.
Acknowledgements: This is work with Eran Eldar, Gyung Jin Bae, Zeb Kurth-Nelson and Ray Dolan.

Constructing illusions of impossibility - the role of temporal misdirection in magic performances
Thomas Fraps, magician, Munich

In the last decade there has been a renewed interest in the scientific examination of strategies employed by magicians to achieve their seemingly impossible illusions [1,2]. One of the most pervasive psychological methods in magic is misdirection: the magician’s ability to control (visual) attention to cover secret actions. Preventing these actions from entering awareness – an act related to the phenomena of change and inattentional blindness – is crucial in order to create “magic moments” and the experience of the impossible [3]. However, along with targeting visual attention, temporal misdirection and manipulation of subjective time perception is an essential, though less obvious, psychological principle in constructing a magical illusion [4]. One implementation is the insertion of a time interval between the moment of a secret action and the (perceived) moment of magic. This interval acts as a temporal wall preventing the discovery of a causal relationship based on the evaluation of temporal contingencies between events [5]. Thus the perceptual and cognitive coordinate system of the observer is led astray, hiding the true causal chain of events from any attempt at analytical assessment. Further implementations of temporal misdirection are the construction of false causalities [6] and manipulating short-term working memory to erase possible long-term memories of suspicious actions. These temporal tools for constructing illusions of impossibility are discussed in the context of maximising perceptual and cognitive prediction errors in a Bayesian observer [7,8] and are experimentally demonstrated in a live performance.
References:
[1] Kuhn G, Amlani AA, Rensink RA (2008) Towards a science of magic. Trends in Cognitive Sciences 12: 349–354.
[2] Danek AH, Öllinger M, Fraps T, Grothe B, Flanagin VL (2015) An fMRI investigation of expectation violation in magic tricks. Front. Psychol.
[3] Kuhn G, Tatler BW (2011) Misdirected by the gap: The relationship between inattentional blindness and attentional misdirection. Consciousness and Cognition 20(2): 432–436.
[4] Fraps T (2014) Time and Magic – Manipulating Subjective Temporality. In: Subjective Time – The Philosophy, Psychology, and Neuroscience of Temporality (Arstila V, Lloyd D, eds). MIT Press.
[5] Wagemans J, van Lier R, Scholl BJ (2006) Introduction to Michotte’s heritage in perception and cognition research. Acta Psychologica 123: 1–19.
[6] Kelley HH (1980) Magic tricks: The management of causal attributions. In: Perspectives on Attribution Research and Theory: The Bielefeld Symposium, pp. 19–35.
[7] Brown H, Friston KJ (2012) Free-energy and illusions: The Cornsweet effect. Frontiers in Psychology 3(43): 1–13.
[8] Geisler WS, Kersten D (2002) Illusions, perception and Bayes. Nature Neuroscience 5(6): 508–510.

Recurrent computation in human visual object recognition
Nikolaus Kriegeskorte, Director of Cognitive Imaging, Zuckerman Institute, Columbia University, New York.

To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments [1]. Recent advances in neural network modelling have enabled major strides in computer vision and other artificial intelligence applications. This brain-inspired technology provides the basis for tomorrow’s computational neuroscience [1, 2]. Deep convolutional neural nets trained for visual object recognition have internal representational spaces remarkably similar to those of the human and monkey ventral visual pathway [3]. Functional imaging and invasive neuronal recording provide rich brain-activity measurements in humans and animals, but a challenge is to leverage such data to gain insight into the brain’s computational mechanisms [4-6]. We build neural network models of primate vision, inspired by biology and guided by engineering considerations [2, 7, 8]. We also develop statistical inference techniques that enable us to adjudicate among deep neural network models on the basis of brain and behavioral data [4-6]. I will discuss recent work extending deep convolutional feedforward vision models by adding recurrent signal flow. Recurrent networks can recycle their limited neuronal resources to enhance their performance, trading off speed and energy in exchange for higher accuracy [7, 8]. Recurrent convolutional neural networks also provide better accounts of the dynamics of human ventral-stream visual representations, as measured with magnetoencephalography (MEG). Current models still fall short of explaining how humans can so rapidly, robustly, and deeply understand the physical structure and implications of a visual image. However, with the tools of measurement and modeling we now have, the promise of progress is greater than ever.
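The speed-accuracy tradeoff described above can be caricatured with a toy evidence accumulator. This is not one of the talk's models; it merely illustrates, under made-up parameters, why extra recurrent timesteps buy accuracy.

```python
import numpy as np

rng = np.random.default_rng(2)

def recurrent_accuracy(n_steps, n_trials=2000, signal=0.3):
    """Each recurrent step adds one noisy observation of the stimulus to
    the accumulated state; the decision is the sign of the state. More
    steps average out the noise, trading time and energy for accuracy."""
    labels = rng.choice([-1.0, 1.0], size=n_trials)
    state = np.zeros(n_trials)
    for _ in range(n_steps):
        state += signal * labels + rng.normal(0.0, 1.0, n_trials)
    return float((np.sign(state) == labels).mean())

acc_fast = recurrent_accuracy(2)    # few recurrent steps: fast but noisy
acc_slow = recurrent_accuracy(16)   # many steps: slower but more accurate
```

Running the same fixed resources for more timesteps raises accuracy, which is the sense in which a recurrent network "recycles" its neurons.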
References:
[1] Kriegeskorte N, Douglas PK (2018) Cognitive computational neuroscience. Nature Neuroscience.
[2] Kriegeskorte N (2015) Deep neural networks: A new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1: 417-446.
[3] Khaligh-Razavi SM, Kriegeskorte N (2014) Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology 10(11): e1003915.
[4] Diedrichsen J, Kriegeskorte N (2017) Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Computational Biology.
[5] Kriegeskorte N, Diedrichsen J (2016) Inferring brain-computational mechanisms with models of activity measurements. Philosophical Transactions of the Royal Society B.
[6] Kriegeskorte N, Diedrichsen J (2019) Peeling the Onion of Brain Representations. Annual Review of Neuroscience 42: 407-432.
[7] Spoerer CJ, McClure P, Kriegeskorte N (2017) Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology.
[8] Spoerer CJ, Kietzmann TC, Kriegeskorte N (2019) Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition. bioRxiv 677237.
[9] Kietzmann TC, Spoerer CJ, Sörensen L, Cichy RM, Hauk O, Kriegeskorte N (2019) Recurrence required to capture the dynamic computations of the human ventral visual stream. arXiv:1903.05946.

Teaching neural networks to identify mechanistic models of neural dynamics
Jakob Macke, Professor for Computational Neuroengineering, Technical University Munich, Munich.

Computational neuroscientists aim to understand the mechanisms underlying neural dynamics and computations. However, building mechanistic models which are quantitatively consistent with empirical measurements (and in particular with the heterogeneous, high-dimensional data which arise in modern experiments in neuroscience) can be challenging and slow.
I will talk about our recent efforts to speed up and improve this process using machine learning methods that aim to automatically identify data-consistent models from experimental data. Given a set of experimental data, and prior assumptions about the mechanisms underlying the process that generated it, we want to algorithmically find all model parameters that are quantitatively consistent with the data. We tackle this as a statistical inference problem, and teach neural networks to automatically identify data-consistent models using adaptively chosen model simulations. I will demonstrate the efficiency and flexibility of this approach, and highlight applications to dynamical models of single neurons, neural populations, and biological imaging.
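A heavily simplified version of this idea can be sketched with plain rejection sampling in place of the neural-network-based inference the talk describes, on a toy Poisson-rate model with made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy mechanistic model: n spike counts from a Poisson process with
    rate theta (a stand-in for a real, expensive simulator)."""
    return rng.poisson(theta, size=n)

def summary(x):
    """Summary statistic of a simulated data set (here just the mean)."""
    return x.mean()

# "Observed" data, generated here with a hidden ground-truth rate of 5.0.
s_obs = summary(simulate(5.0))

# Rejection sampling: draw parameters from the prior and keep those whose
# simulated summary statistic lands close to the observed one.
prior_draws = rng.uniform(0.1, 20.0, size=20000)
accepted = np.array([th for th in prior_draws
                     if abs(summary(simulate(th)) - s_obs) < 0.2])
# `accepted` approximates the set of data-consistent parameters.
```

The neural-network approach in the talk replaces this brute-force loop with a learned mapping from data to plausible parameters, making far more efficient use of each simulation.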

Reading neural responses to avoid information loss
Tatyana Sharpee, Professor, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla.

Classic studies show that in many species – from leech and cricket to primate – responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe how signals are communicated between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses, ideally without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
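The standard population vector readout that the talk takes as its starting point can be written down directly. Below, the cosine tuning curves and all parameters are hypothetical, and the talk's modified, loss-minimizing readout is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
preferred = np.linspace(0.0, 2 * np.pi, n, endpoint=False)  # preferred directions

def responses(theta):
    """Noisy, rectified cosine-tuned firing rates for stimulus direction theta."""
    return np.maximum(10 + 8 * np.cos(preferred - theta)
                      + rng.normal(0.0, 1.0, n), 0.0)

def population_vector(r):
    """Classic readout: sum each neuron's preferred-direction unit vector
    weighted by its firing rate; the angle of the resultant is the estimate."""
    vx = float((r * np.cos(preferred)).sum())
    vy = float((r * np.sin(preferred)).sum())
    return np.arctan2(vy, vx) % (2 * np.pi)

theta_true = 1.0
theta_hat = population_vector(responses(theta_true))
```

In this idealized uniform, cosine-tuned case the readout works well; the information loss the talk analyzes arises once tuning is heterogeneous or weak, which is exactly where the modified expression matters.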

Can you see a thought? Neuronal ensembles as emergent units of cortical function
Rafael Yuste, NeuroTechnology Center, Department of Biological Sciences, Columbia University, New York.

The design of neural circuits, with large numbers of neurons interconnected in vast networks, strongly suggests that they are specifically built to generate emergent functional properties [1]. To explore this hypothesis, we have developed two-photon holographic methods to selectively image and manipulate the activity of neuronal populations in 3D in vivo [2]. Using them, we find that groups of synchronous neurons (neuronal ensembles) dominate the evoked and spontaneous activity of mouse primary visual cortex [3]. Ensembles can be optogenetically imprinted for several days, and some of their neurons can trigger the entire ensemble [4]. By activating these pattern-completion cells in ensembles involved in visual discrimination paradigms, we can bi-directionally alter behavioral choices [5]. Our results are consistent with the possibility that neuronal ensembles are functional building blocks of cortical circuits.
References:
[1] R. Yuste, From the neuron doctrine to neural networks. Nat Rev Neurosci 16, 487-497 (2015).
[2] L. Carrillo-Reid, W. Yang, J. E. Kang Miller, D. S. Peterka, R. Yuste, Imaging and Optically Manipulating Neuronal Ensembles. Annu Rev Biophys, 46: 271-293 (2017).
[3] J. E. Miller, I. Ayzenshtat, L. Carrillo-Reid, R. Yuste, Visual stimuli recruit intrinsically generated cortical ensembles. PNAS 111, E4053-4061 (2014).
[4] L. Carrillo-Reid, W. Yang, Y. Bando, D. S. Peterka, R. Yuste, Imprinting and recalling cortical ensembles. Science 353, 691-694 (2016).
[5] L. Carrillo-Reid, S. Han, W. Yang, A. Akrouh, R. Yuste, Controlling visually-guided behavior by holographic recalling of cortical ensembles. Cell 178, 447-457 (2019).

The primary visual cortex as a center stage for vision: bottom-up visual selection and top-down visual recognition
Li Zhaoping, Head of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Tübingen.

Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which (via lateral/recurrent mechanisms within V1) creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts (via V1’s projection to the superior colliculus). This suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better recognition, through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Thus, peripheral vision is mainly for looking (to select a peripheral location to attend or shift gaze to), while central vision is mainly for seeing (or visual decoding). Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to various problems such as illusions and crowding. Therefore, V1 is a center stage for both the bottom-up and top-down processes, rather than merely an early visual cortex. Visual processes beyond V1 should be understood in light of V1’s role. Relevant behavioral, physiological, anatomical, and fMRI data will be presented to illustrate the above.
Additional information is available here

Venue

The conference takes place at the Ernst Strüngmann Institute, which is conveniently located in the Niederrad neighbourhood of Frankfurt.

Google Maps OpenStreetMap

Address
Ernst Strüngmann Institute
Deutschordenstraße 46
60528 Frankfurt
Germany

See here for directions to the ESI and for accommodation suggestions near the institute or the city center.

Contact us

esi-sync (at) esi-frankfurt.de

Past meetings