A history of computational neuroscience: Still searching for the engram
Marsha R. PENNER and S.N. BURKE
The last decade of the twentieth century witnessed tremendous growth in neurobiology and in information technology, and many aspects of these developments have merged to facilitate new computational approaches to studying and understanding the brain. Computational neuroscience refers to a field devoted to interpreting the information content of neuronal signals by modeling many levels of the nervous system. Although in the grand scheme of things the field of computational neuroscience may seem new, the central question it addresses is certainly not: how is information represented and stored in the brain? At one level, realistic brain models involve large-scale simulations that include as much cellular detail as possible. For example, at the level of a single neuron, the Hodgkin-Huxley (1952) model of the action potential in the squid giant axon describes the velocity and shape of the action potential with great accuracy. At the network level, simplifying brain models consider how to interpret the information encoded by the activity of a large neuronal population. In his now-famous book, D.O. Hebb (1949) was one of the first to describe a mechanism whereby information can be represented in the brain by ensembles of nerve cells, which he called cell assemblies and phase sequences. Other notable contributions include those of Pitts and McCulloch (1947), who addressed the issue of pattern recognition; Steinbuch (1961), who proposed the learning matrix, the starting point for many later computational models; and David Marr, who introduced the notion of feedforward inhibition to the learning matrix. Of course, many others have also made substantial contributions to this field. In this presentation, the contributions of these pioneers will be discussed, as well as the work of other, less well-known and contemporary scientists.
Session VI -- Poster Session 1
Montreal, Quebec, Canada