Work in the Visuomotor Learning Lab (VMLL) is aimed at understanding how the brain combines different forms of sensory and motor information to help plan, execute, and adapt movements (‘sensorimotor integration’). We are particularly interested in how uncertainty associated with movement planning and execution leads to variability in motor performance. Sensorimotor integration is currently probed in VMLL using a combination of three approaches:
- single- and multi-unit recordings in awake, behaving non-human primates
- human psychophysical experiments
- computational modeling and simulation techniques.
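As an illustration of the kind of computational model used to study how noisy sensory signals are combined, the sketch below simulates reliability-weighted (maximum-likelihood) integration of two hypothetical cues, such as vision and proprioception, about a target position. The specific noise levels and variable names are illustrative assumptions, not the lab's actual models; it shows only the standard result that the integrated estimate is less variable than either cue alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the lab's experiments)
true_pos = 10.0                    # target position (arbitrary units)
sigma_vis, sigma_prop = 1.0, 2.0   # assumed noise of visual / proprioceptive cues

n_trials = 100_000
vis = rng.normal(true_pos, sigma_vis, n_trials)    # noisy visual estimates
prop = rng.normal(true_pos, sigma_prop, n_trials)  # noisy proprioceptive estimates

# Maximum-likelihood (inverse-variance-weighted) combination of the two cues
w_vis = sigma_prop**2 / (sigma_vis**2 + sigma_prop**2)
combined = w_vis * vis + (1.0 - w_vis) * prop

# Predicted standard deviation of the combined estimate
pred_sigma = np.sqrt((sigma_vis**2 * sigma_prop**2) /
                     (sigma_vis**2 + sigma_prop**2))

print(f"visual sd:   {vis.std():.3f}")
print(f"proprio sd:  {prop.std():.3f}")
print(f"combined sd: {combined.std():.3f} (predicted {pred_sigma:.3f})")
```

Under these assumptions the combined estimate's variability falls below that of either single cue, which is one way models of this kind link sensory uncertainty to variability in motor performance.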
Two laboratories currently support this research, one for non-human primates and one for humans. Both laboratories employ state-of-the-art motion tracking and display technologies, including semi-immersive 3D virtual reality environments. The long-term goals of our research are to improve motor function in individuals with impaired sensorimotor integration and to augment ‘normal’ motor performance through the development of brain-centered training protocols and assistive technologies that interface directly with the nervous system.