Thomas Serre
My long-term goal is to help realize one of the oldest dreams in artificial intelligence: To reverse-engineer the brain and build machines that can see and interpret the visual world as well as we do. Achieving such an ambitious goal would give scientists a powerful tool to uncover and understand key mechanisms of human perception and cognition as well as to create a new generation of "seeing" machines.
Dr. Serre received a PhD in computational neuroscience from the Massachusetts Institute of Technology (MIT) in 2006 and a master's degree in EECS from the Ecole Nationale Supérieure des Télécommunications de Bretagne (Brest, France) in 2000. His research focuses on understanding the brain mechanisms underlying the recognition of objects and complex visual scenes using a combination of behavioral, imaging and physiological techniques. These experiments fuel the development of quantitative computational models that try not only to mimic the processing of visual information in the cortex but also to match human performance in complex visual tasks.
Together with Tomaso Poggio and colleagues at MIT, he has developed a large-scale computational model of visual recognition in cortex. This research was featured in the BBC series "Visions from the Future" and appeared in several news articles (The Economist, New Scientist, Scientific American, IEEE Computing in Science and Technology, Technology Review and EyeNet) and a post on Slashdot.
Most of the work in visual neuroscience has focused on the brain mechanisms underlying the rapid recognition of simple visual scenes using artificial, static and isolated stimuli. However, our visual world is both highly dynamic and complex, with typical visual scenes consisting of many objects embedded in background clutter. The result: Our visual cortex must process noisy and ambiguous perceptual measurements. The success of everyday vision implies powerful neural mechanisms, yet to be understood, for combining bottom-up, sensory-driven information with top-down, attention- and memory-driven processes to help resolve visual ambiguities and discount irrelevant clutter. Our lab studies this fundamental question using innovations in machine learning (e.g., Cauchoix et al., 2012; Reddy et al., 2010; Kliper et al., 2010; Zhang et al., 2010) and computational modeling (see Serre & Poggio, 2010 for a recent review).
Sheridan Junior Faculty Teaching Fellows Program (2011-2012)
Brown Institute for Brain Science
Center for Vision Research
Project title: Towards a human-level neuromorphic artificial visual system
Funding agency: Defense Advanced Research Projects Agency (DARPA)
Grant type: DARPA Grant Application
Grant number: DARPA-BAA-09-31
Award: Phase I: $543,331.65. Options (Phase II + III): $1,091,720.72.
Duration: 54 months
Status: Phase I funded
Project title: Towards a biologically-inspired vision system for the control of navigation in complex environments
Funding agency: Office of Naval Research (ONR)
Grant type: ONR Grant Application
Grant number: BAA-01-11
Duration: 36 months
Project title: Brain-like computing system for analyzing visual scenes
Grant type: Robert J. and Nancy D. Carney Fund for Scientific Innovation
Computational models of vision and cognition.
- Computational Cognitive Science (COGS 01291)
- Computational Vision (COGS 01520)
- Topics in Perception: Scene Understanding (COGS 01580A)
- Y. Zhang*, E. Meyers*, N. Bichot, T. Serre, T. Poggio & R. Desimone. Object decoding with attention in inferior temporal cortex. Proceedings of the National Academy of Sciences, doi:10.1073/pnas.1100999108, 2011.
- T. Serre and T. Poggio. A neuromorphic approach to computer vision. Communications of the ACM, 53(10), pp. 54-61, Oct 2010.
- L. Reddy, N. Tsuchiya & T. Serre. Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns. NeuroImage, 50(2), pp. 818-825, Apr 2010.
- S. Chikkerur, T. Serre, C. Tan and T. Poggio. What and where: A Bayesian inference theory of attention. Vision Research, 55(22), pp. 2233-2247, Oct 2010.
- H. Jhuang, E. Garrote, X. Yu, V. Khilnani, T. Poggio, A. Steele and T. Serre. Automated home-cage behavioral phenotyping of mice. Nature Communications, 1(1), doi:10.1038/ncomms1064, 2010.