Abstract
We develop a phenomenological coarse-graining procedure for activity in a large network of neurons, and apply this to recordings from a population of 1000+ cells in the hippocampus. Distributions of coarse-grained variables seem to approach a fixed non-Gaussian form, and we see evidence of scaling in both static and dynamic quantities. These results suggest that the collective behavior of the network is described by a nontrivial fixed point.
Original language | English (US) |
---|---|
Article number | 178103 |
Journal | Physical Review Letters |
Volume | 123 |
Issue number | 17 |
DOIs | 10.1103/PhysRevLett.123.178103 |
State | Published - Oct 23 2019 |
All Science Journal Classification (ASJC) codes
- Physics and Astronomy (all)
Coarse graining, fixed points, and scaling in a large population of neurons. / Meshulam, Leenoy; Gauthier, Jeffrey L.; Brody, Carlos D.; Tank, David W.; Bialek, William.
In: Physical Review Letters, Vol. 123, No. 17, 178103, 23.10.2019.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - Coarse graining, fixed points, and scaling in a large population of neurons
AU - Meshulam, Leenoy
AU - Gauthier, Jeffrey L.
AU - Brody, Carlos D.
AU - Tank, David W.
AU - Bialek, William
N1 - Funding Information: National Science Foundation (Grants No. PHY-1734030 and No. PHY-1607612); Center for the Science of Information (Grant No. CCF-0939370); National Institutes of Health (Grant No. 1R01EB026943-01); Howard Hughes Medical Institute.

In systems with many degrees of freedom, it is natural to search for simplified, coarse-grained descriptions; our modern understanding of this idea is based on the renormalization group (RG).
In its conventional formulation, we start with the joint probability distribution for variables defined at the microscopic scale, and then coarse grain by local averaging over small neighborhoods in space. The joint distribution of coarse-grained variables evolves as we change the averaging scale, and in most cases the distribution becomes simpler at larger scales: macroscopic behaviors are simpler and more universal than their microscopic mechanisms [1-4]. Is it possible that simplification in the spirit of the RG will succeed in the more complex context of biological systems? The exploration of the brain has been revolutionized over the past decade by methods to record, simultaneously, the electrical activity of large numbers of neurons [5-15]. Here we analyze experiments on 1000+ neurons in the CA1 region of the mouse hippocampus. The mice are genetically engineered to express a protein whose fluorescence depends on the calcium concentration, which in turn follows electrical activity; fluorescence is measured with a scanning two-photon microscope as the mouse runs along a virtual linear track. Figure 1(a) shows a schematic of the experiment, described more fully in Ref. [15]. The field of view is 0.5 × 0.5 mm² [Fig. 1(b)], and we identify 1485 cells that were monitored for 39 min, which included 112 runs along the virtual track. Images are sampled at 30 Hz, segmented to assign signals to individual neurons, and denoised to reveal transient activity above a background of silence [Fig. 2(a)].

FIG. 1. (a) Schematic of the experiment, imaging inside the brain of a mouse running on a styrofoam ball. Motion of the ball advances the position of a virtual world projected on a surrounding toroidal screen. (b) Fluorescence image of neurons in the hippocampus expressing calcium-sensitive fluorescent protein.

FIG. 2. Fluorescence signals, denoising, and coarse graining.
(a) Continuous fluorescence signals, raw in grey and denoised in black, for three neurons in our field of view. (b) Activity of eight example neurons. Maximally correlated pairs are grouped together by summing their activity, normalizing so the mean of nonzero values is one. Each cell can participate in only one pair, and all cells are grouped by the end of each iteration. Darker arrows correspond to stronger correlations in the pair.

In familiar applications of the RG, microscopic variables have defined locations in space, and interactions are local, so it makes sense to average over spatial neighborhoods. Neurons are extended objects, and make synaptic connections across distances comparable to our entire field of view, so locality is not a useful guide. But in systems with local interactions, microscopic variables are most strongly correlated with their near spatial neighbors. We will thus use correlation itself, rather than physical connectivity, as a proxy for neighborhood. We compute the correlation matrix of all the variables, search greedily for the most correlated pairs, and define a coarse-grained variable by the sum of the two microscopic variables in each pair [16], as illustrated in Fig. 2. This can be iterated, placing the variables onto a binary tree; alternatively, after k iterations we have grouped the neurons into clusters of size K = 2^k, and each cluster is represented by a single coarse-grained variable. We emphasize that this is only one of many possible coarse-graining schemes [17]. A technical point concerns the normalization of the coarse-grained variables. We start with signals whose amplitude has an element of arbitrariness, being dependent on the relations between electrical activity and calcium concentration, and between calcium concentration and protein fluorescence. Nonetheless, there are many moments in time when the signal is truly zero, representing the absence of activity.
We want to choose a normalization that removes the arbitrariness but preserves the meaning of zero, so we set the average amplitude of the nonzero signals in each cell equal to one, and restore this normalization at each step of coarse graining. Formally, we start with variables {x_i(t)} describing activity in each neuron i = 1, 2, ..., N at time t; since our coarse graining does not mix different moments in time, we drop this index for now. We compute the correlations

c_{ij} = \frac{\langle \delta x_i \, \delta x_j \rangle}{\left[ \langle (\delta x_i)^2 \rangle \, \langle (\delta x_j)^2 \rangle \right]^{1/2}},    (1)

where \delta x_i = x_i - \langle x_i \rangle. We then search for the largest off-diagonal element of this matrix, identifying the maximally correlated pair i, j*(i), and construct the coarse-grained variable

x_i^{(2)} = Z_i^{(2)} \left( x_i + x_{j*(i)} \right),    (2)

where Z_i^{(2)} restores the normalization described above. We remove the pair [i, j*(i)], search for the next most correlated pair, and so on, greedily, until the original N variables have become ⌊N/2⌋ pairs. We can iterate this process, generating N_K = ⌊N/K⌋ clusters of size K = 2^k, represented by coarse-grained variables {x_i^{(K)}}. We would like to follow the joint distribution of variables at each step of coarse graining, but this is impossible using only a finite set of samples [18]. Instead, as in the analysis of Monte Carlo simulations [19], we follow the distribution of individual coarse-grained variables. This distribution is a mixture of a delta function exactly at zero and a continuous density over positive values,

P_K(x) \equiv \frac{1}{N_K} \sum_{i=1}^{N_K} \langle \delta(x - x_i^{(K)}) \rangle = P_0(K) \, \delta(x) + [1 - P_0(K)] \, Q_K(x),    (3)

where our choice of normalization requires that

\int_0^\infty dx \, Q_K(x) \, x = 1.    (4)

If the coarse-grained activity of a cluster is zero, all the microscopic variables in that cluster must be zero, so that P_0(K) measures the probability of silence in clusters of size K.
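A minimal numerical sketch of one greedy coarse-graining step, assuming activity is stored as a cells × time array; the function name and array layout are illustrative choices, not from the paper:

```python
import numpy as np

def coarse_grain_step(x):
    """One step of greedy pairwise coarse graining.

    x : array of shape (n_cells, n_times), nonnegative activity.
    Returns an array of shape (n_cells // 2, n_times): maximally
    correlated pairs are summed [Eq. (2)] and renormalized so the
    mean of the nonzero values of each new variable is one.
    """
    n = x.shape[0]
    c = np.corrcoef(x)              # c_ij of Eq. (1)
    np.fill_diagonal(c, -np.inf)    # exclude self-pairing
    out = []
    for _ in range(n // 2):
        # most correlated remaining pair
        i, j = np.unravel_index(np.argmax(c), c.shape)
        s = x[i] + x[j]
        nz = s > 0
        if nz.any():
            s = s / s[nz].mean()    # Z restores <nonzero> = 1
        out.append(s)
        # each cell can participate in only one pair
        c[[i, j], :] = -np.inf
        c[:, [i, j]] = -np.inf
    return np.array(out)
```

Iterating this function k times yields the clusters of size K = 2^k; the fraction of exactly-zero entries in each coarse-grained variable then estimates P_0(K).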
This probability must decline with K, and in systems with a finite range of correlations the decline is exponential at large K, even if individual neurons differ in their probability of silence. Figure 3 shows the behavior of P_0(K) and Q_K(x) from the microscopic scale K = 1 up to K = 256, and we see that the data are described, across the full range of K, by

P_0(K) = \exp\left( -a K^{\tilde\beta} \right),    (5)

with \tilde\beta = 0.87 ± 0.03 [20]. This scaling with \tilde\beta < 1 suggests that correlations among neurons are self-similar across ∼2.5 decades in K [21].

FIG. 3. Scaling in the probabilities of silence and activity. (left) Probability of silence as a function of cluster size. Dashed line is an exponential decay (\tilde\beta = 1), and the solid line is Eq. (5). (right) Distribution of activity at different levels of coarse graining, from Eq. (3) with normalization from Eq. (4). Larger clusters correspond to lighter colors.

Coarse graining replaces individual variables by averages over increasingly many microscopic variables. If correlations among the microscopic variables are sufficiently weak, the central limit theorem drives the distribution toward a Gaussian; a profound result of the RG is the existence of non-Gaussian fixed points. While summation of correlated variables easily generates non-Gaussian distributions at intermediate K, there is no reason to expect an approach to a fixed non-Gaussian form, as we see with Q_K(x) on the right side of Fig. 3. If correlations are self-similar, then we should see this in more detail by looking inside the clusters of size K, which are analogous to spatially contiguous regions in a system with local interactions. We recall that, in systems with translation invariance, the matrix of correlations among microscopic variables is diagonalized by a Fourier transform, and that the eigenvalues λ of the covariance matrix are the power spectrum or propagator G(k) [22].
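This correspondence between covariance eigenvalues and the power spectrum can be checked numerically. The toy script below (not the paper's data analysis) builds a translation-invariant, i.e., circulant, covariance matrix and compares its eigenvalues with the discrete Fourier transform of one row; the decay length of 5 lattice sites is an arbitrary illustrative choice:

```python
import numpy as np

# Circulant covariance: entries depend only on (periodic) separation,
# so the eigenvalues equal the DFT of one row -- the power spectrum G(k).
n = 64
sep = np.arange(n)
row = np.exp(-np.minimum(sep, n - sep) / 5.0)       # covariance vs separation
C = np.array([np.roll(row, i) for i in range(n)])   # circulant matrix
eig = np.sort(np.linalg.eigvalsh(C))[::-1]          # eigenvalues, descending
spectrum = np.sort(np.fft.fft(row).real)[::-1]      # DFT of a row, real by symmetry
```

Up to ordering, `eig` and `spectrum` agree to machine precision, which is the statement used in the text.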
At a fixed point of the RG this propagator will be scale invariant, λ = G(k) = A / k^{2-η}, where the wave vector k indexes the eigenvalues from largest (at small k) to smallest (at large k), and in d dimensions the eigenvalue at k has rank ∼ (Lk)^d, where L is the linear size of the system. The number of variables in the system is K ∼ (L/a)^d, where a is the lattice spacing, and the largest k ∼ 1/a. Putting these factors together we have

λ = B \left( \frac{K}{\mathrm{rank}} \right)^{\mu},    (6)

with μ = (2 - η)/d. Thus scale invariance implies both a power-law dependence of the eigenvalue on rank and a dependence only on fractional rank (rank/K) when we compare systems of different sizes. Figure 4 shows the eigenvalues of the covariance matrix, C_{ij} = \langle \delta x_i \, \delta x_j \rangle, in clusters of size K = 16, 32, 64, and 128; the eigenvalue spectrum of a covariance matrix is distorted by finite sample size, catastrophically so at large K, and we stop at K = 128 to avoid these problems. A power-law dependence on rank is visible (with μ = 0.71 ± 0.15), albeit over only a little more than one decade; more compelling is the dependence of the spectrum on relative rank, accurate over much of the spectrum within the small error bars of our measurements.

FIG. 4. Scaling in the eigenvalue spectra of the covariance matrix, C_{ij} = \langle \delta x_i \, \delta x_j \rangle, for clusters of different sizes. Larger clusters correspond to lighter colors. Solid line is the fit to Eq. (6).

If we are near a fixed point of the RG, then in systems with local interactions we will see dynamic scaling, with fluctuations on length scale ℓ relaxing on timescale τ ∝ ℓ^z. Although interactions in the neural network are not local, we have clustered neurons into blocks based on the strength of their correlations, and we might expect that larger blocks will relax more slowly.
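The scaling test of Eq. (6), eigenvalue against fractional rank pooled across cluster sizes, can be sketched as a joint log-log fit. This is an illustration on synthetic spectra that obey Eq. (6) exactly, not the recorded data; the function name is ours:

```python
import numpy as np

def fit_mu(eigs_by_K):
    """Fit lambda = B * (K / rank)^mu jointly across cluster sizes.

    eigs_by_K : dict mapping cluster size K to a descending array of
    covariance eigenvalues. Returns mu, estimated as minus the slope
    of log(lambda) against log(rank / K).
    """
    logs_x, logs_y = [], []
    for K, lam in eigs_by_K.items():
        rank = np.arange(1, len(lam) + 1)
        logs_x.append(np.log(rank / K))
        logs_y.append(np.log(lam))
    slope, _ = np.polyfit(np.concatenate(logs_x), np.concatenate(logs_y), 1)
    return -slope   # lambda ~ (rank/K)^(-mu)

# synthetic spectra with mu = 0.71, as reported for the data
mu_true, B = 0.71, 2.0
spectra = {K: B * (K / np.arange(1, K + 1)) ** mu_true
           for K in (16, 32, 64, 128)}
```

Because all four synthetic spectra collapse onto one line in these coordinates, the joint fit recovers the exponent; on real spectra the quality of this collapse is itself the test of scaling.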
To test this, we compute the temporal correlation functions

C_K(t) = \frac{1}{N_K} \sum_{i=1}^{N_K} \langle \delta x_i^{(K)}(t_0) \, \delta x_i^{(K)}(t_0 + t) \rangle.    (7)

Qualitatively, the decay of C_K(t) is slower at larger K, but we see in Fig. 5 that correlation functions at different K have the same form within error bars if we scale the time axis by a correlation time τ_c(K), which we can define as the 1/e point of the decay. Although the range of τ_c is small (a consequence of the small value of \tilde z), we see that

τ_c(K) = τ_1 K^{\tilde z},    (8)

except at the smallest K, where the dynamics are limited by the response time of the fluorescent indicator molecule itself; quantitatively, \tilde z = 0.11 ± 0.01. Error bars on τ_c at K = 256 are large, so we fit only up to K = 128; errors at different K are necessarily correlated, which results in relatively small error bars for \tilde z.

FIG. 5. Dynamic scaling. (left) Correlation functions for different cluster sizes [Eq. (7)]. We show K = 1, 4, 16, 64, and 128 (the last with error bars), where color lightens as K increases, illustrating the scaling behavior when we measure time in units of τ_c(K). (right) Dependence of correlation time on cluster size, with fit to Eq. (8).

Before interpreting these results, we make several observations, explored in detail elsewhere [23]. First, and most importantly, we have done the same experiment and analysis independently in three different mice. There are no "identified neurons" in the mammalian brain, so we can revisit the same region of the hippocampus in another animal, but there is no sense in which we revisit the same neurons. Nonetheless we see the same approach to a fixed distribution and the same power-law scaling, with exponents measured in different animals agreeing within error bars; this is true even for \tilde\beta, whose error bars are in the second decimal place.
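The 1/e definition of the correlation time behind Eq. (8) can be illustrated with a short sketch, here tested on a synthetic exponential decay rather than the measured C_K(t); the exponent z̃ would then follow from a log-log fit of τ_c(K) against K:

```python
import numpy as np

def correlation_time(c, dt=1.0):
    """Correlation time as the 1/e crossing of c(t)/c(0),
    with linear interpolation between time samples."""
    c = c / c[0]
    thr = 1.0 / np.e
    i = np.nonzero(c < thr)[0][0]          # first sample below 1/e
    frac = (c[i - 1] - thr) / (c[i - 1] - c[i])
    return dt * (i - 1 + frac)

# synthetic exponential decay with known correlation time
dt, tau = 1.0 / 30.0, 0.5                  # 30 Hz sampling, as in the data
t = np.arange(0.0, 5.0, dt)
tau_hat = correlation_time(np.exp(-t / tau), dt)
```

The interpolation matters here because τ_c varies over less than one decade across K, so sample-level quantization of the crossing point would be a visible source of error.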
These results suggest, strongly, that the behaviors we have identified are independent of variations in microscopic detail, as we had hoped. Second, all the steps of the analysis that we have followed here can be redone after discretizing the continuous fluorescence signals into binary on-off states for each neuron, as in Ref. [24]. Again we see an approach to a fixed non-Gaussian distribution, and power-law scaling; all exponents agree within error bars. Third, we consider the relation of our observations to the salient qualitative fact about the rodent hippocampus, namely that many of the neurons in this brain area are "place cells." These cells are active only when the animal visits a small, compact region of space, and silent otherwise; activity in the place cell population is thought to form a cognitive map that guides navigation [25,26]. This spatial localization of activity is preserved by our coarse-graining procedure, although the procedure was not designed specifically to do this. In fact, fewer than half of the cells in the population that we study are place cells in this particular environment, but after several steps of coarse graining essentially all of the coarse-grained variables have well-developed place fields. On the other hand, the scaling behavior that we see is not a simple consequence of place field structure. To test this, we estimate for each cell the probability of being active at each position, and then simulate a population of cells that are active with these probabilities but independently of one another; activity is driven by the observed trajectory of the mouse along the virtual track, and to compare with the fluorescence data we smooth the activity with a kernel matched to the known dynamics of the indicator molecule. In smaller populations this independent place cell model fails to capture important aspects of the correlation structure [24], and here we find that it does not exhibit the scaling shown in Figs. 3-5.
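One way to build such an independent place cell surrogate can be sketched as follows, assuming binarized activity and a position trace normalized to [0, 1); the function name and bin count are illustrative, and we omit the indicator-matched smoothing kernel that the full analysis applies:

```python
import numpy as np

def independent_place_cell_surrogate(x, pos, n_bins=40, rng=None):
    """Surrogate preserving each cell's position-dependent activity
    probability while breaking correlations between cells.

    x   : (n_cells, n_times) binary activity (0/1).
    pos : (n_times,) position along the track, values in [0, 1).
    """
    rng = np.random.default_rng(rng)
    bins = np.minimum((pos * n_bins).astype(int), n_bins - 1)
    surr = np.zeros_like(x)
    for b in range(n_bins):
        idx = bins == b
        if not idx.any():
            continue
        p = x[:, idx].mean(axis=1)          # per-cell rate in this position bin
        # draw each cell independently with its own place-tuned rate
        surr[:, idx] = rng.random((x.shape[0], idx.sum())) < p[:, None]
    return surr
```

By construction the surrogate matches each cell's place tuning, so any scaling it fails to reproduce cannot be attributed to place field structure alone.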
These behaviors also do not arise in surrogate data sets that break the correlations among neurons. Finally, we consider more generic model networks. We have simulated networks with continuous activity variables (“rate networks”) and random connections [27] , as well as networks of spiking neurons in the asynchronous-irregular regime [28] that can generate some signatures of critical behavior without fine tuning [29] . In none of these simulations do we see scaling or the emergence of fixed non-Gaussian distributions of the coarse-grained variables; more details will be given elsewhere. The absence of scaling in these simulated networks confirms the intuition from statistical physics that arriving at a fixed point of the RG, with associated scaling behaviors, is not an accident. We conclude that our observations are not artifacts of limited data, are not generic features of neural networks, and are not simple consequences of known features of the neural response in this particular network. In equilibrium statistical mechanics problems with local interactions, a fixed distribution and power-law scaling behaviors are signatures of a system poised near a critical point in its phase diagram. The idea that networks of neurons might be near to criticality has been discussed for more than a decade [30] . One version of this idea focuses on “avalanches” of sequential activity in neurons [31,32] , by analogy to what happens in the early sandpile models for self-organized criticality [33] . In the human brain, it has been suggested that the large scale patterns of physical connectivity may be scale free or self-similar, providing a basis for self-similarity in neural activity [34,35] . A different version of the idea focuses on the distribution over microscopic states in the network at a single instant of time [36,37] , and is more closely connected to criticality in equilibrium statistical mechanics. 
Related ideas have been explored in other biological systems, from biochemical and genetic networks [38-41] to flocks and swarms [42,43]. In our modern view, invariance of probability distributions under iterated coarse graining, that is, a fixed point of the renormalization group, may be the most fundamental test for criticality, and has meaning independent of analogies to thermodynamics. A fundamental result of the RG is the existence of irrelevant operators, which means that successive steps of coarse graining lead to simpler and more universal models. Although the RG transformation begins by reducing the number of degrees of freedom in the system, simplification does not result from this dimensionality reduction but rather from the flow through the space of models. The fact that our phenomenological approach to coarse graining gives results that are familiar from successful applications of the RG in statistical physics encourages us to think that simpler and more universal theories of neural network dynamics are possible.

We thank S. Bradde, A. Cavagna, D. S. Fisher, I. Giardina, M. O. Magnasco, S. E. Palmer, and D. J. Schwab for helpful discussions. Work supported in part by the National Science Foundation through the Center for the Physics of Biological Function (Grant No. PHY-1734030), the Center for the Science of Information (Grant No. CCF-0939370), and Grant No. PHY-1607612; by the Simons Collaboration on the Global Brain; by the National Institutes of Health (Grant No. 1R01EB026943-01); and by the Howard Hughes Medical Institute.

[1] L. P. Kadanoff, Physics 2, 263 (1966).
[2] K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975).
[3] K. G. Wilson, Sci. Am. 241, 158 (1979).
[4] J. Cardy, Scaling and Renormalization in Statistical Physics (Cambridge University Press, Cambridge, England, 1996).
[5] R. Segev, J. Goodhouse, J. L. Puchalla, and M. J. Berry II, Nat. Neurosci. 7, 1155 (2004).
[6] A. M. Litke, IEEE Trans. Nucl. Sci. 51, 1434 (2004).
[7] C. D. Harvey, F. Collman, D. A. Dombeck, and D. W. Tank, Nature (London) 461, 941 (2009).
[8] O. Marre, D. Amodei, N. Deshmukh, K. Sadeghi, F. Soo, T. E. Holy, and M. J. Berry, J. Neurosci. 32, 14859 (2012).
[9] J. J. Jun, Nature (London) 551, 232 (2017).
[10] J. E. Chung, H. R. Joo, J. L. Fan, D. F. Liu, A. H. Barnett, S. Chen, C. Geaghan-Breiner, M. P. Karlsson, M. Karlsson, K. Y. Lee, H. Liang, J. F. Magland, J. A. Pebbles, A. C. Tooker, L. Greengard, V. M. Tolosa, and L. M. Frank, Neuron 101, 21 (2019).
[11] D. A. Dombeck, C. D. Harvey, L. Tian, L. L. Looger, and D. W. Tank, Nat. Neurosci. 13, 1433 (2010).
[12] C. D. Harvey, P. Coen, and D. W. Tank, Nature (London) 484, 62 (2012).
[13] Y. Ziv, L. D. Burns, E. D. Cocker, E. O. Hamel, K. K. Ghosh, L. J. Kitch, A. El Gamal, and M. J. Schnitzer, Nat. Neurosci. 16, 264 (2013).
[14] J. P. Nguyen, F. B. Shipley, A. N. Linder, G. S. Plummer, M. Liu, S. U. Setru, J. W. Shaevitz, and A. M. Leifer, Proc. Natl. Acad. Sci. U.S.A. 113, E1074 (2016).
[15] J. L. Gauthier and D. W. Tank, Neuron 99, 179 (2018).
[16] Since we are using the strongest correlations, our coarse-graining procedure is not very sensitive to the spurious correlations that arise from finite sample size.
[17] As an example, we could assign each neuron coordinates r_i in a D-dimensional space such that the correlations c_{ij} [Eq. (1)] are a (nearly) monotonic function of the distance d_{ij} = |r_i - r_j|; this is multidimensional scaling: J. B. Kruskal, Psychometrika 29, 1 (1964). As pointed out to us by M. O. Magnasco, our coarse-graining procedure then starts as local averaging in this abstract space, and it would be interesting to take this embedding seriously as a way of recovering locality of the RG transformation. Other alternatives include using a more global, self-consistent definition of maximally correlated pairs, or using delayed correlations. More generally, we can think of coarse graining as data compression, and we could use different metrics to define what is preserved in this compression.
[18] When we use the renormalization group to study models, we indeed follow the flow of the joint distribution in various approximations [2,4]. When we are trying to analyze data, either from experiments or from simulations, this is not possible.
[19] K. Binder, Z. Phys. B 43, 119 (1981).
[20] Error bars for all scaling behaviors reported in this Letter were estimated as the standard deviation across quarters of the data. To respect temporal correlations, the location of each quarter was chosen at random, but time points remained in order inside it. For each quarter of the data, exponents are estimated as the slope of the best-fit line on a log-log scale. We have also verified that there is no systematic dependence of our results on sample size.
[21] Scaling usually is an asymptotic behavior, but in Fig. 3 we see a power law across almost the full range from K = 1 to K = 512. If we fit only for K ≥ 32, for example, we find the same value of \tilde\beta within error bars; there is no sign that these larger values of K are more consistent with \tilde\beta = 1. Thanks to D. S. Fisher for asking about this.
[22] For a discussion of the relation between RG and the spectra of covariance matrices, see S. Bradde and W. Bialek, J. Stat. Phys. 167, 462 (2017).
[23] L. Meshulam, arXiv:1812.11904.
[24] L. Meshulam, J. L. Gauthier, C. D. Brody, D. W. Tank, and W. Bialek, Neuron 96, 1178 (2017).
[25] J. O'Keefe and J. Dostrovsky, Brain Res. 34, 171 (1971).
[26] J. O'Keefe and L. Nadel, The Hippocampus as a Cognitive Map (Clarendon Press, Oxford, 1978).
[27] T. P. Vogels, K. Rajan, and L. F. Abbott, Annu. Rev. Neurosci. 28, 357 (2005).
[28] N. Brunel, J. Comput. Neurosci. 8, 183 (2000).
[29] J. Touboul and A. Destexhe, Phys. Rev. E 95, 012413 (2017).
[30] T. Mora and W. Bialek, J. Stat. Phys. 144, 268 (2011).
[31] J. M. Beggs and D. Plenz, J. Neurosci. 23, 11167 (2003).
[32] N. Friedman, S. Ito, B. A. W. Brinkman, M. Shimono, R. E. Lee DeVille, K. A. Dahmen, J. M. Beggs, and T. C. Butler, Phys. Rev. Lett. 108, 208102 (2012).
[33] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. 59, 381 (1987).
[34] M. Zheng, arXiv:1904.11793.
[35] R. F. Betzel and D. S. Bassett, NeuroImage 160, 73 (2017).
[36] G. Tkačik, E. Schneidman, M. J. Berry II, and W. Bialek, arXiv:q-bio/0611072.
[37] G. Tkačik, T. Mora, O. Marre, D. Amodei, S. E. Palmer, M. J. Berry, and W. Bialek, Proc. Natl. Acad. Sci. U.S.A. 112, 11508 (2015).
[38] J. E. S. Socolar and S. A. Kauffman, Phys. Rev. Lett. 90, 068702 (2003).
[39] P. Ramo, J. Kesseli, and O. Yli-Harja, J. Theor. Biol. 242, 164 (2006).
[40] M. Nykter, N. D. Price, M. Aldana, S. A. Ramsey, S. A. Kauffman, L. E. Hood, O. Yli-Harja, and I. Shmulevich, Proc. Natl. Acad. Sci. U.S.A. 105, 1897 (2008).
[41] D. Krotov, J. O. Dubuis, T. Gregor, and W. Bialek, Proc. Natl. Acad. Sci. U.S.A. 111, 3683 (2014).
[42] W. Bialek, A. Cavagna, I. Giardina, T. Mora, O. Pohl, E. Silvestri, M. Viale, and A. M. Walczak, Proc. Natl. Acad. Sci. U.S.A. 111, 7212 (2014).
[43] A. Cavagna, D. Conti, C. Creato, L. Del Castello, I. Giardina, T. S. Grigera, S. Melillo, L. Parisi, and M. Viale, Nat. Phys. 13, 914 (2017).
PY - 2019/10/23
Y1 - 2019/10/23
N2 - We develop a phenomenological coarse-graining procedure for activity in a large network of neurons, and apply this to recordings from a population of 1000+ cells in the hippocampus. Distributions of coarse-grained variables seem to approach a fixed non-Gaussian form, and we see evidence of scaling in both static and dynamic quantities. These results suggest that the collective behavior of the network is described by a nontrivial fixed point.
AB - We develop a phenomenological coarse-graining procedure for activity in a large network of neurons, and apply this to recordings from a population of 1000+ cells in the hippocampus. Distributions of coarse-grained variables seem to approach a fixed non-Gaussian form, and we see evidence of scaling in both static and dynamic quantities. These results suggest that the collective behavior of the network is described by a nontrivial fixed point.
UR - http://www.scopus.com/inward/record.url?scp=85074495928&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074495928&partnerID=8YFLogxK
U2 - 10.1103/PhysRevLett.123.178103
DO - 10.1103/PhysRevLett.123.178103
M3 - Article
C2 - 31702278
AN - SCOPUS:85074495928
VL - 123
JO - Physical Review Letters
JF - Physical Review Letters
SN - 0031-9007
IS - 17
M1 - 178103
ER -