Structure and dynamics of coordination of complex systems
The core abilities of autonomous systems rely on multiple interacting subsystems that contribute to common goals with only partial information. The complexity of such systems, both high-dimensional and non-linear, is a major obstacle to understanding their organization and function. We use computational modeling, machine learning, statistics, and causal inference to address this challenge.
In the case of the mammalian brain, strong recurrent and bidirectional connectivity characterizes the circuits, leading to complex dynamics in which various modules cooperate at different levels to give rise, for example, to coherent behaviors and percepts. This complexity manifests itself as collective oscillations as well as more intricate dynamical patterns, such as sharp-wave ripple complexes, that can be observed in electrical brain activity. These neural events are believed to play a key role in information processing, learning and behavior. Our research aims at designing better techniques to detect these events and at understanding their underlying mechanisms and computational role.
To address these questions, we put an emphasis on developing new machine learning tools with strong theoretical foundations that are particularly well suited to capturing the complexity of biological signals. These include unsupervised learning algorithms to identify relevant patterns in large neural recording datasets [ ], non-parametric statistical tools (based on kernel methods) to identify the complex statistical dependencies of biological signals [ ], as well as causal inference methods to infer the underlying mechanisms generating the data [ ].
Importantly, our work also leads us to use models to investigate the principles underlying learning and plasticity in biological and artificial networks [ ].
In parallel, the intensive development of new artificial deep neural networks has led to impressive successes, but the functioning of these architectures remains largely elusive due to their high-dimensional connectivity. This provides us with an opportunity to use our network analysis tools to uncover fundamental principles of such systems, and possibly relate them to biology. We are currently investigating causality and invariance principles [ ] to understand the structure of deep generative models and, in particular, to assess their modularity [ ].
Analysis of multiscale neural events
Be it during wakefulness or sleep, our brains implement the numerous functions key to our survival with extraordinary reliability. This implies precise coordination of transient mechanisms at multiple spatio-temporal scales, ensuring both the synergy between modules contributing to the same task and the non-interference between network activities in charge of different functions. Such fine coordination seems at odds with the widespread and largely recurrent anatomical connectivity of the central nervous system, and with the seemingly random fluctuations observed in ongoing brain signals. We postulate that this paradox can be resolved by carefully telling apart the myriad neural activities routed dynamically through this network, each implementing a very specific function. The concurrent study of brain activity at multiple scales, using simultaneous recordings of action potentials, multi-site Local Field Potentials (LFP), and functional Magnetic Resonance Imaging (fMRI) signals, offers an exceptional opportunity to investigate the properties and functions of these neural events at the system level. This requires advanced data analysis techniques that fully capture the complexity of these signals, including highly transient and non-linear phenomena.
Invariance principles for causal generative models
Generative models encompass most unsupervised learning techniques that aim at building a probabilistic model from data. While the typical measure of success of such techniques is how well the model distribution fits the empirical data distribution, we propose that additional causal assumptions can enforce that characteristics of the model reflect the data-generating mechanism. Capturing properties of the true causal mechanism allows, in principle, better generalization and better interpretability of the models. We introduced a general group invariance framework to allow a quantitative assessment of this idea, which connects to several causal inference methods, and we are currently applying this approach to classical unsupervised learning methods such as clustering, as well as to more recent techniques relying on deep generative models.
Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)
In Advances in Neural Information Processing Systems 26, pages: 2535-2543, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)
Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.
Nature, 491, pages: 547-553, November 2012 (article)
Hippocampal ripples, episodic high-frequency field-potential oscillations primarily occurring during sleep and calmness, have been described in mice, rats, rabbits, monkeys and humans, and so far they have been associated with retention of previously acquired awake experience. Although hippocampal ripples have been studied in detail using neurophysiological methods, the global effects of ripples on the entire brain remain elusive, primarily owing to a lack of methodologies permitting concurrent hippocampal recordings and whole-brain activity mapping. By combining electrophysiological recordings in hippocampus with ripple-triggered functional magnetic resonance imaging, here we show that most of the cerebral cortex is selectively activated during the ripples, whereas most diencephalic, midbrain and brainstem regions are strongly and consistently inhibited. Analysis of regional temporal response patterns indicates that thalamic activity suppression precedes the hippocampal population burst, which itself is temporally bounded by massive activations of association and primary cortical areas. These findings suggest that during off-line memory consolidation, synergistic thalamocortical activity may be orchestrating a privileged interaction state between hippocampus and cortex by silencing the output of subcortical centres involved in sensory processing or potentially mediating procedural learning. Such a mechanism would cause minimal interference, enabling consolidation of hippocampus-dependent memory.
In Advances in Neural Information Processing Systems 25, pages: 2465-2473, (Editors: P Bartlett and FCN Pereira and CJC. Burges and L Bottou and KQ Weinberger), Curran Associates Inc., 26th Annual Conference on Neural Information Processing Systems (NIPS), 2012 (inproceedings)
Panagiotaropoulos, T., Besserve, M., Crocker, B., Kapoor, V., Tolias, A., Panzeri, S., Logothetis, N.
41(239.15), 41st Annual Meeting of the Society for Neuroscience (Neuroscience), November 2011 (poster)
The inferior convexity of the macaque prefrontal cortex (icPFC) is known to be involved in higher-order processing of sensory information, mediating stimulus selection, attention and working memory. Until now, the vast majority of electrophysiological investigations of the icPFC employed single-electrode recordings. As a result, relatively little is known about the spatiotemporal structure of neuronal activity in this cortical area. Here we study in detail the spatiotemporal properties of local field potentials (LFPs) in the icPFC using multi-electrode recordings during anesthesia. We computed the LFP-LFP coherence as a function of frequency for thousands of pairs of simultaneously recorded sites anterior to the arcuate and inferior to the principal sulcus. We observed two distinct peaks of coherent oscillatory activity between approximately 4-10 and 15-25 Hz. We then quantified the instantaneous phase of these frequency bands using the Hilbert transform and found robust phase gradients across recording sites. The dependency of the phase on spatial location reflects the existence of traveling waves of electrical activity in the icPFC. The dominant axis of these traveling waves roughly followed the ventral-dorsal plane. Preliminary results show that repeated visual stimulation with a 10 s movie had no dramatic effect on the spatial structure of the traveling waves. Traveling waves of electrical activity in the icPFC could reflect highly organized cortical processing in this area of prefrontal cortex.
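The band-limited instantaneous phase analysis described above can be sketched in a few lines (our own minimal illustration, not the study's code; the sampling rate, filter order, and band edges below are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(lfp, fs, band):
    """Instantaneous phase of `lfp` restricted to `band` (Hz).

    Zero-phase band-pass filtering, then the angle of the analytic
    signal obtained via the Hilbert transform.
    """
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
    return np.angle(hilbert(filtfilt(b, a, lfp, axis=-1), axis=-1))

# Example: a 20 Hz oscillation sampled at 1 kHz
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
phase = band_phase(np.sin(2 * np.pi * 20.0 * t), fs, (15.0, 25.0))
```

Regressing such phases on electrode positions is then one way to expose the phase gradients that signal a traveling wave.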
In pages: 2080-2083, IEEE, Piscataway, NJ, USA, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2011 (inproceedings)
Cross-spectral density (CSD) is widely used to find linear dependencies between two real- or complex-valued time series. We define a non-linear extension of this measure by mapping the time series into two Reproducing Kernel Hilbert Spaces. The dependency is quantified by the Hilbert-Schmidt norm of a cross-spectral density operator between these two spaces. We prove that, by choosing a characteristic kernel for the mapping, this quantity detects any pairwise dependency between the time series. We then provide a fast estimator of the Hilbert-Schmidt norm based on the Fast Fourier Transform. We demonstrate the interest of this approach by quantifying non-linear dependencies between frequency bands of simulated signals and intra-cortical neural recordings.
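A rough finite-dimensional sketch of this construction (our own illustration, not the paper's estimator): approximate the kernel feature maps with random Fourier features, estimate the cross-spectral density between every pair of feature channels by segment-averaged FFTs, and take the Frobenius (Hilbert-Schmidt) norm per frequency. The feature count, kernel bandwidth, and segment length are arbitrary choices here.

```python
import numpy as np

def random_fourier_features(x, n_feat=16, sigma=1.0, seed=0):
    # Random Fourier feature approximation of a Gaussian (characteristic)
    # kernel feature map for a scalar time series x of length T.
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0 / sigma, size=n_feat)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(np.outer(x, w) + b)  # (T, n_feat)

def kernel_csd_hs_norm(x, y, nperseg=128):
    # Frobenius (= Hilbert-Schmidt) norm, per frequency, of the segment-
    # averaged cross-spectral density matrix between the feature channels
    # of x and y: a finite-dimensional stand-in for the operator-valued CSD.
    phi = random_fourier_features(x, seed=0)
    psi = random_fourier_features(y, seed=1)
    phi = phi - phi.mean(axis=0)  # center the feature maps
    psi = psi - psi.mean(axis=0)
    n_seg = len(x) // nperseg
    acc = 0.0
    for s in range(n_seg):
        sl = slice(s * nperseg, (s + 1) * nperseg)
        F = np.fft.rfft(phi[sl], axis=0)  # (n_freq, n_feat)
        G = np.fft.rfft(psi[sl], axis=0)
        acc = acc + np.einsum("fi,fj->fij", F, np.conj(G)) / nperseg
    return np.linalg.norm(acc / n_seg, axis=(1, 2))  # HS norm per frequency
```

Unlike the linear CSD, this quantity can pick up purely quadratic couplings such as y being driven by the squared amplitude of x.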
Decoding experimental conditions from single-trial electroencephalographic (EEG) signals is becoming a major challenge for the study of brain function and for real-time applications such as Brain Computer Interfaces. EEG source reconstruction offers principled ways to estimate cortical activities from EEG signals, but to what extent it can enhance informative brain signals in single trials has not been addressed in a general setting.
We tested this using the minimum norm estimate (MNE) solution to estimate spectral power and coherence features at the cortical level. With a fast implementation, we computed a support vector machine (SVM) classifier output from these quantities in real time, without priors on the relevant functional networks. We applied this approach to single-trial decoding of ongoing mental imagery tasks using EEG data recorded in 5 subjects.
Our results show that reconstructing the underlying cortical network dynamics significantly outperforms a standard electrode-level approach in terms of information transfer, and also reduces redundancy between coherence and power features, supporting a decrease of volume conduction effects. Additionally, the classifier coefficients reflect the most informative features of network activity, showing an important contribution of localized motor and sensory brain areas, and of coherence between areas up to 6 cm apart.
This study provides a computationally efficient and interpretable strategy to extract information from functional networks at the cortical level in single trials. Moreover, it sets a general framework for evaluating the performance of EEG source reconstruction methods by their decoding abilities.
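Spectral power features of the kind feeding such a classifier can be sketched with plain NumPy (an illustrative simplification computed per trial and channel; the study used source-reconstructed power and coherence features, and the frequency bands below are our own choices):

```python
import numpy as np

def band_power_features(trials, fs, bands=((8, 12), (13, 30))):
    """Log band-power features per trial and channel.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns an (n_trials, n_channels * len(bands)) feature matrix
    suitable for a linear classifier such as an SVM.
    """
    spec = np.abs(np.fft.rfft(trials, axis=-1)) ** 2      # periodogram
    freqs = np.fft.rfftfreq(trials.shape[-1], 1.0 / fs)
    feats = [spec[..., (freqs >= lo) & (freqs < hi)].mean(-1)
             for lo, hi in bands]                          # mean power per band
    return np.log(np.stack(feats, axis=-1).reshape(trials.shape[0], -1))
```

The log transform makes the roughly multiplicative power fluctuations better behaved for linear decoders; coherence features between channel pairs would be appended in the same flattened layout.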
Progress in Biophysics and Molecular Biology, 105(1-2):29-33, March 2011 (article)
A central issue in neuroscience is to understand how neural units integrate internal and external signals to create coherent states. Recently, it has been shown that the sensitivity and dynamic range of neural assemblies are optimal at a critical coupling among their elements. Complex architectures of connections seem to play a constructive role in the reliable coordination of neural units. Here we show that the synchronizability and sensitivity of excitable neural networks can be tuned by diversity in connection strengths. We illustrate our findings for weighted networks with regular, random and complex topologies. Additional comparisons with real brain networks support previous studies suggesting that heterogeneity in the connectivity may play a constructive role in information processing. These findings provide insights into the relationship between structure and function of neural circuits.
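The notion of synchronizability invoked here is commonly made concrete by the eigenratio of the graph Laplacian from the master stability function framework; below is a minimal sketch for a symmetric weighted network (the function name and the absence of normalization are our choices, not the paper's):

```python
import numpy as np

def synchronizability(W):
    """Eigenratio lambda_max / lambda_2 of the Laplacian of weight matrix W.

    W must be symmetric with non-negative weights. A smaller ratio means
    the network synchronizes over a wider range of coupling strengths
    (master stability function argument).
    """
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    ev = np.sort(np.linalg.eigvalsh(L))     # ascending; ev[0] ~ 0
    return ev[-1] / ev[1]
```

For example, a complete graph attains the optimal ratio of 1, while a sparse ring of the same size scores markedly worse; reweighting edges heterogeneously shifts this ratio, which is one way to quantify the tuning effect described above.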
The modern analysis of multivariate electrical brain signals requires advanced statistical tools to automatically extract and quantify their information content. These tools include machine learning techniques and information theory. They are currently used both in basic neuroscience and challenging applications such as brain computer interfaces. We review here how these methods have been used at the Laboratoire d’Électroencéphalographie et de Neurophysiologie Appliquée (LENA) to develop a general tool for the real time analysis of functional brain signals. We then give some perspectives on how these tools can help understanding the biological mechanisms of information processing.
40(616.2), 40th Annual Meeting of the Society for Neuroscience (Neuroscience), November 2010 (poster)
Rhythms in the gamma band (30-100 Hz) are observed in the mammalian brain with a large variety of functional correlates. Nevertheless, their functional role is still debated. One way to disentangle this issue is to go beyond usual correlation analyses and apply causality measures that quantify the directed interactions between gamma rhythms and other aspects of neural activity. These measures can be further compared with other aspects of neurophysiological signals to find markers of neural interactions.
In a recent study, we analyzed extracellular recordings in the primary visual cortex of 4 anesthetized macaques during the presentation of movie stimuli using a causality measure called Transfer Entropy. We found causal interactions between high-frequency gamma rhythms (60-100 Hz) recorded at different electrodes, involving in particular their phase, and between the gamma phase and spiking activity quantified by the instantaneous envelope of the MUA band (1-3 kHz).
Here, we further investigate in the same dataset the meaning of these phase-MUA and phase-phase causal interactions by studying the distribution of phases at multiple recording sites at lags around the occurrence of spiking events.
First, we found a sharpening of the gamma phase distribution at one electrode when spikes occur at another recording site. This phenomenon appeared as a form of phase-spike synchronization and was quantified by an information-theoretic measure. We found that this measure correlates significantly with phase-MUA causal interactions. Additionally, we quantified in a similar way the interplay between spiking and the phase difference between two recording sites (reflecting the well-known concept of phase synchronization). We found that, depending on the pair of recording sites, spiking can correlate either with phase synchronization or with desynchronization with respect to baseline. This effect correlates very well with the phase-phase causality measure.
These results provide evidence that high-frequency phase-spike synchronization reflects communication between distant neural populations in V1. Moreover, either phase synchronization or desynchronization may favor neural communication between recording sites, depending on the pair. This result, which contrasts with current hypotheses on the role of phase synchronization, could be interpreted as the presence of inhibitory interactions that are suppressed by desynchronization. Finally, our findings give new insights into the role of gamma rhythms in regulating local computation in the visual cortex.
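Phase synchronization between two sites is commonly quantified by the phase-locking value, the magnitude of the mean unit phasor of the phase difference. A minimal sketch (a standard measure, assuming the instantaneous phases have already been extracted, e.g. via the Hilbert transform):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """Phase-locking value in [0, 1].

    Equals 1 for a constant phase difference between the two sites and
    tends to 0 for independent phases as the number of samples grows.
    """
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
```

Comparing this quantity around spike times against a baseline window is one simple way to probe the spiking-dependent synchronization and desynchronization effects described above.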
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.