Neuroengineering/Brain-Machine Interfaces


Introduction
Brain-Machine Interfaces
Optogenetic Mapping: Neurotechnology Renaissance
Ubiquitous Computing and Augmented Reality
The Affective Turn: Emotional Branding, Neuromarketing, and the New, New Media
Concluding Thoughts

Brain-Machine Interfaces



Since the late 1990s the fields of brain sciences and neuroengineering have produced an astonishing array of discoveries that hold out the prospect of far-reaching medical advances for the treatment of paralysis, limb loss, and a number of neurological impairments by interfacing intact neural structures with artificial neuroprosthetic devices. Among the most successful and justly celebrated sensory neuroprosthetic devices are cochlear and retinal implants that use electrical stimulation to recreate or partially restore perceptual capability. Niels Birbaumer (Tübingen University) and his colleagues have developed brain-machine interfaces using scalp electroencephalography (EEG) signals that address critical clinical problems such as communication in “locked-in” patients and movement restoration in patients with spinal cord lesions and chronic stroke. Recently Brain-Computer Interface (BCI) technology has also been used for non-medical purposes, giving rise to a new generation of measurement devices that allow macroscopic brain states such as attention, performance capability, and emotion to be accessed and decoded in real time. The signals extracted by BCI techniques are then used to improve and optimize man–machine interaction, enhancing human performance and even developing novel types of skills. Benjamin Blankertz, Michael Tangermann, Klaus-Robert Müller and their colleagues at the Machine Learning Lab of the TU Berlin have recently extended these devices into interfaces for videogames and other forms of interactive entertainment.
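To make the measurement side of this concrete, the sketch below shows the general shape of such a pipeline: short EEG windows are reduced to band-power features and fed to a linear classifier that labels a macroscopic brain state. This is a minimal illustration on synthetic data, assuming standard Python libraries (numpy, scipy, scikit-learn); the channel count, band edges, and classifier are my own illustrative choices, not the Berlin group’s actual implementation.

```python
# Minimal sketch of an EEG band-power BCI pipeline (illustrative only):
# reduce short EEG windows to spectral features and classify a macroscopic
# brain state (e.g., "attentive" vs. "inattentive").
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250          # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # classic EEG bands

def band_power_features(window):
    """window: (n_channels, n_samples) EEG segment -> flat feature vector."""
    freqs, psd = welch(window, fs=FS, nperseg=window.shape[1])
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))   # mean power per channel per band
    return np.concatenate(feats)

# Synthetic stand-in for labeled calibration data: 200 two-second windows of
# 8-channel noise, with one class given a slightly elevated alpha rhythm.
rng = np.random.default_rng(0)
X, y = [], []
for i in range(200):
    label = i % 2
    eeg = rng.normal(size=(8, 2 * FS))
    if label:
        t = np.arange(2 * FS) / FS
        eeg += 0.5 * np.sin(2 * np.pi * 10 * t)   # add a 10 Hz (alpha) component
    X.append(band_power_features(eeg))
    y.append(label)

clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```

In a real system the same trained classifier would be applied to a sliding window of live EEG, which is what makes the “real time” in the passage above possible.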

These initial breakthroughs in neuroengineering have gained high praise for their contributions to rehabilitation medicine, but they have also quickly fueled the fantasies of futurologists who imagine not just replacement parts for the neurologically impaired but the augmentation of human abilities through improved memory and analytic capabilities, preparing the ground for a future fusion of artificially intelligent agents with humans in a posthuman singularity. And it is not just the hearts of futurologists and Isaac Asimov science fiction fans that palpitate over Brain-Machine Interface technology. The US Defense Advanced Research Projects Agency (DARPA) is one of the biggest sponsors of BMI research. With its Human Assisted Neural Devices Program (HANDP), funded since 2002, DARPA’s stated goal has been first to create novel concepts that will improve warfighter performance on the battlefield and second to improve prosthetic technology for severely injured veterans.

Neural Ensembles and the Neural Code

This is not the context to go into detail about the history of brain-machine interfaces, but I do want to point to several features of this work that have challenged some canonical assumptions about the brain and opened up new directions for thinking about the future relation of humans and machines in a coming merger of the virtual and the real.

First and foremost is the radical transformation introduced by the concept of neuronal ensemble recording, in which populations of neurons are followed rather than single neurons, as had been the case in traditional behavioral neuroscience. Up until the late 1980s single-neuron recordings were the mainstay of neuroscience. In large part this approach was dictated by the measuring technology of the day. But during the past 25 years, the introduction of new electrophysiological and imaging methods has allowed neurophysiologists to measure the concurrent activity of progressively larger samples of single neurons in behaving animals. The shift in thinking about multi-electrode recording occurred in parallel with the development of Brain-Machine Interfaces. Instrumentation and new techniques of measurement have also transformed, and are still in the process of rewriting, what we know about brain physiology. Single-neuron recording went hand-in-hand with the localization theory of brain function: the notion, treated as bedrock of the science by most neurophysiologists, that the cerebral cortex is divided into highly localized visual, auditory, tactile, motor, olfactory, and gustatory centers. These core areas were then subdivided into specialized regions for color, motion detection, face recognition, and other complex functions. Going even further, individual neurons have been labeled as visual neurons, mirror neurons, face neurons, touch neurons, and even “grandmother neurons” (Nicolelis, 2011, 46). Among the most cherished doctrines of this era of brain localization was the notion, based on discoveries by Vernon Mountcastle in 1955, that these highly localized somatosensory regions of the cortex are organized into neat columns. Mountcastle’s work appeared to establish that for a common receptive field location (e.g. the cat’s foreleg) cells were segregated into domains representing different sensory modalities. Mountcastle hypothesized that there is an elementary unit of organization in the somatic cortex made up of a vertical group of cells extending through all the cellular layers, a unit he termed a “column.” By making multiple, closely spaced penetrations with his single-neuron recording electrodes, Mountcastle concluded that individual columns are no more than 500 μm wide and intermingled in a mosaic-like fashion, each a block of tissue containing neurons whose salient physiological properties are identical (reprised and reviewed in Mountcastle, 1997).

The advent of neural ensemble recording has called the existence of these columns into question and replaced the static architectonic picture of the brain grounded in fixed functional regions with a highly dynamic model that emphasizes spatiotemporal flows. In place of behaviors being localized to specific brain regions, the new model has a number of radically new features, including the following: 1) the representation of any behavioral parameter is distributed across many brain areas; 2) single neurons are insufficient for encoding a given parameter; 3) individual neurons do not have a one-to-one relationship to a particular motor parameter; rather, a single neuron is informative of several behavioral parameters (individual neurons multitask); 4) a certain minimal threshold number of neurons in a population is needed for their information capacity to stabilize at a sufficiently high value; 5) the same behavior can be produced by different neuronal assemblies; and finally 6) the primacy of neural plasticity: neural ensemble function is crucially dependent on the capacity to plastically adapt to new behavioral tasks (Nicolelis, 2009, 532).
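Points 1, 2, and 4 of this list are often demonstrated with a “neuron-dropping” analysis, in which a decoder is retrained on progressively larger random subsets of the recorded population and its accuracy is plotted against ensemble size. Below is a minimal sketch of that analysis on synthetic data; the least-squares decoder and all population parameters are my own illustrative assumptions, not any lab’s actual pipeline.

```python
# Illustrative "neuron-dropping curve": decode a 1-D movement parameter from
# progressively larger random subsets of a synthetic neural population.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 100, 2000
velocity = rng.normal(size=n_samples)                 # behavioral parameter
weights = rng.normal(scale=0.3, size=n_neurons)       # each neuron only weakly informative
rates = np.outer(velocity, weights) + rng.normal(size=(n_samples, n_neurons))

def decode_r2(subset):
    """Fit a least-squares decoder on half the data, score R^2 on the other half."""
    X, y = rates[:, subset], velocity
    Xtr, Xte, ytr, yte = X[:1000], X[1000:], y[:1000], y[1000:]
    coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    resid = yte - Xte @ coef
    return 1 - resid.var() / yte.var()

for size in (1, 5, 10, 25, 50, 100):
    subset = rng.choice(n_neurons, size=size, replace=False)
    print(f"{size:3d} neurons -> R^2 = {decode_r2(subset):.2f}")
```

Running this shows the signature pattern: a single neuron decodes poorly, accuracy climbs steeply as neurons are added, and then the curve saturates once the ensemble passes a minimal threshold size.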

This neuronal ensemble perspective has been enabled by a new generation of recording devices in the form of multiple arrayed microelectrodes (up to 400 in some experiments) that can be surgically implanted across several areas of the somatosensory cortex and are capable of simultaneously recording the firing of local populations of neurons in the vicinity of the electrodes. The Utah Intracortical Electrode Array, developed by Maynard, Nordhausen, and Normann in the late 1990s, was the core technology enabling the first generation of brain-machine interfaces. Additional crucial enabling elements have been the development of electronics for sampling, filtering, and amplifying neural signals from the electrodes, along with fast computers and software for extracting meaningful patterns out of the storm of electrical pulses detected by the microarray recording devices. Using sophisticated data-mining techniques and algorithms from artificial neural networks, neuroengineers such as Miguel Nicolelis are able to detect the neural codes for motor commands such as controlled arm and hand motion, grasping, walking, and other sensorimotor actions.
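One common form such a decoding algorithm takes in BMI work generally is a linear filter that maps a short history of each neuron’s binned firing rate onto a continuous kinematic variable. The sketch below builds a lagged linear decoder of this kind on synthetic data; the bin width, number of lags, and plain least-squares fit are illustrative assumptions of mine, not the specific algorithm of the experiments described here.

```python
# Sketch of a lagged linear ("Wiener-filter-style") decoder on synthetic data:
# predict hand velocity at time t from the last L bins of every neuron's rate.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_bins, L = 30, 3000, 10          # 10 lags of 100 ms bins = 1 s of history
hand_vel = np.convolve(rng.normal(size=n_bins), np.ones(20) / 20, mode="same")

# Synthetic rates: each neuron's firing leads the velocity by its own delay,
# mimicking motor cortical activity that precedes movement.
delays = rng.integers(1, L, size=n_neurons)
rates = np.stack([np.roll(hand_vel, -d) for d in delays], axis=1)
rates += rng.normal(scale=0.5, size=rates.shape)

# Design matrix: row t holds the rates from bins t-L+1 .. t for all neurons.
X = np.hstack([rates[L - 1 - k : n_bins - k] for k in range(L)])
y = hand_vel[L - 1 :]

split = len(y) // 2                          # fit on the first half, test on the rest
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef
r2 = 1 - np.var(y[split:] - pred) / np.var(y[split:])
print(f"held-out R^2 of the lagged linear decoder: {r2:.2f}")
```

The essential move is the lagged design matrix: because cortical firing precedes the movement it drives, the decoder must see a window of recent population activity, not just the current bin.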

These components form the basis of a Brain-Machine Interface. In their now classic experiments Nicolelis, John Chapin, and their team of graduate students and postdocs surgically inserted microwire recording arrays into six areas of the sensorimotor cortex of an owl monkey named Aurora (they have also worked with hooded rats and rhesus monkeys) who had been trained to play a videogame. Aurora operated a joystick that moved a circular cursor across a video screen in pursuit of a target. If she successfully captured the target within a specified time, she was rewarded with a drink of her favorite juice. Once Aurora had been trained on this task, the neural signals representing the arm, hand, and wrist movements controlling the joystick were captured and converted to digital instructions for operating a robot arm. As Aurora played the game, the robot arm controlling a second joystick would mirror her movements, gradually improving in accuracy as the experiment went on. Visual feedback allowed Aurora to see that her movements were being copied by the robot arm. After she had played the game in this fashion for several days, Nicolelis took away Aurora’s joystick and attached the cursor control to the wrist of the robot. Somewhat befuddled, Aurora sat for a while, and then after a few minutes began moving her arm as if the phantom joystick were there, while the robot arm completed the task and got Aurora her juice reward. Even more remarkable, after several experiments of this sort Aurora realized that she didn’t need to move her arm at all: simply by imagining the movements she would make to capture the target, she could get the robot to do the trick for her. There have been a number of variations on these experiments, including having the robot arm located at MIT but visible via a television screen to Aurora back at Duke. This arrangement worked as long as the lag time did not exceed 250-300 milliseconds. Another spectacular demonstration of the brain-machine interface involved a rhesus monkey walking on a treadmill. In similar fashion, the real-time capture of the monkey’s brain signals controlling her gait on the treadmill was converted into a program operating the legs of a robot in Tokyo, visible on a video monitor. The monkey was rewarded for learning that her gait on the treadmill controlled the gait of the robot, which sped up, slowed down, and stopped based on her own movements. After performing this game for an hour the monkey’s treadmill was turned off, but she quickly realized that by imagining her own leg movements she could control the Tokyo robot and receive her juice reward.
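The 250-300 millisecond figure hints at how such systems are organized in software: as a tight closed loop that must acquire neural signals, decode them, drive the actuator, and return visual feedback within a fixed latency budget. The following schematic sketch uses invented placeholder functions (acquire_spike_counts, decode_velocity, command_robot_arm) standing in for the acquisition, decoding, and actuation stages; it is not a real device API.

```python
# Schematic of a closed-loop BMI cycle (all functions are invented
# placeholders, not a real device API): read binned spike counts, decode a
# movement command, drive the actuator, and keep total loop latency under
# the point where control reportedly degrades (~250-300 ms).
import time

MAX_LATENCY = 0.3   # seconds
BIN_PERIOD = 0.1    # pace the loop at ~10 Hz bins (an illustrative choice)

def acquire_spike_counts():
    """Placeholder: return binned spike counts from the implanted arrays."""
    return [1] * 100

def decode_velocity(counts, weights):
    """Placeholder linear decoder: weighted sum of the spike counts."""
    return sum(c * w for c, w in zip(counts, weights))

def command_robot_arm(velocity):
    """Placeholder: send the decoded command to the (possibly remote) arm."""
    pass

weights = [0.01] * 100
for _ in range(50):                      # short run; a real loop runs indefinitely
    t0 = time.monotonic()
    counts = acquire_spike_counts()
    command_robot_arm(decode_velocity(counts, weights))
    elapsed = time.monotonic() - t0      # the subject sees the arm move (feedback)
    if elapsed > MAX_LATENCY:
        print(f"warning: cycle took {elapsed * 1000:.0f} ms; control will degrade")
    time.sleep(max(0.0, BIN_PERIOD - elapsed))
```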

An interesting feature of these experiments was that as the animal shifted between normal and brain control mode (without moving its arms or legs) a subset of the recorded cortical neurons ceased to fire. Perhaps more surprisingly, a fraction of the recorded cortical neurons showed clear velocity and direction-tuning that was related to the movements of the robotic prosthesis but not to the displacement of the animal’s own arms. Such tuning developed and became sharper during the period in which monkeys learned to operate the BMI without execution of overt body movements (brain control mode). As animals shifted back and forth between using their own limbs or the artificial actuator controlled by the BMI to solve a particular motor task, functional coupling between pairs of cortical neurons adapted dynamically.
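Direction tuning of the kind reported here is conventionally quantified by fitting a cosine tuning curve to a neuron’s firing rate as a function of movement direction, from which a preferred direction and a modulation depth fall out of a simple least-squares fit. The sketch below demonstrates the fit on a synthetic neuron; all numbers are made up for illustration and the cosine model is the textbook convention, not the specific analysis used in these experiments.

```python
# Fit a cosine tuning curve, rate = b0 + b1*cos(theta) + b2*sin(theta),
# to a synthetic neuron and recover its preferred movement direction.
import numpy as np

rng = np.random.default_rng(3)
true_pref = np.deg2rad(135)                      # ground-truth preferred direction
theta = rng.uniform(0, 2 * np.pi, size=400)      # movement direction on each trial
rate = 10 + 5 * np.cos(theta - true_pref) + rng.normal(scale=2, size=theta.size)

# The model is linear in (cos, sin), so ordinary least squares suffices.
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, b_cos, b_sin = np.linalg.lstsq(A, rate, rcond=None)[0]
pref = np.arctan2(b_sin, b_cos)                  # b_cos, b_sin encode the preferred angle
depth = np.hypot(b_cos, b_sin)                   # strength of directional modulation

print(f"estimated preferred direction: {np.degrees(pref):.1f} deg "
      f"(true {np.degrees(true_pref):.1f}), modulation depth {depth:.1f}")
```

A tuning curve that “sharpens” during brain-control learning corresponds, in this model, to a growing modulation depth for the robot’s movement variables.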

Nicolelis draws the important conclusion from this “that, at its limit, cortical plasticity may allow artificial tools to be incorporated as part of the multiple functional representations of the body that exist in the mammalian brain. If this proves to be true, we would predict that continuous use of a BMI should induce subjects to perceive artificial prosthetic devices, such as prosthetic arms and legs, controlled by a BMI as part of their own bodies. Such a prediction opens the intriguing possibility that the representation of self does not necessarily end at the limit of the body surface, but can be extended to incorporate artificial tools under the control of the subject’s brain. BMI research further stretches this puzzling idea by demonstrating that, once brain activity is recorded and decoded efficiently in real time, its capacity to control artificial devices can undergo considerable modification in terms of temporal, spatial, kinematic and kinetic characteristics, termed scaling. In other words, not only can a BMI enact voluntary motor outputs faster than the subject’s biological apparatus (temporal scaling), but it can also accomplish motor tasks at a distance from the subject’s own body (spatial scaling), by controlling an actuator that is either considerably smaller (for example, a nano-tool) or considerably larger (for example, a crane) than the subject’s own biological appendices” (Nicolelis, 2009, 535-536).

Sharing Brain States

In a follow-on set of experiments the Nicolelis lab has experimented with transferring the brain state of one animal, in this case a hooded rat, to another rat through a direct brain-to-brain interface. In the experiment one rat, the “explorer,” is trained to use its facial whiskers to determine the diameter of an aperture in the dark; the goal is to find the aperture of the right size to let the rat through to a reward. The explorer rats trained on this task in the Nicolelis experiment selected the correct aperture and got the reward more than 90 percent of the time, within 150 milliseconds. In the next phase of the experiment a second rat, also trained in the tactile discrimination task, is placed in a separate box, but it is not allowed to use its own whiskers to determine the width of the aperture and get the reward. Instead, the explorer rat’s brain activity is transmitted wirelessly to this second (decoder) rat. The decoder rat pokes its head into one of two spots on the wall to indicate which aperture to select for the reward, and since it cannot use its own highly sensitive whiskers to make the choice, it must select on the basis of the stimulus pattern it receives from the explorer rat. If the decoder rat selects the correct aperture, it is rewarded, and the explorer rat is given an extra bonus reward for successfully transmitting its perceptual experience to its decoder partner. The idea here is that the decoder rat cooperates virtually with the explorer rat and in fact expands its own body image to incorporate the whiskers of the explorer rat as if they were its own. More complicated versions of this experiment are also being attempted, including a brain interface involving an intermediary layer of rats in which rats trained in exploring different aspects of an environment or object are allowed to share their perceptions and form a consensus (Nicolelis, 2011, 247-249).
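Reports of this experiment describe the transfer roughly as follows: the explorer’s ensemble activity is compared with a stored template, compressed to a scalar, and converted into a train of intracortical microstimulation pulses delivered to the decoder rat. The sketch below is a loose schematic of that encode-transmit-stimulate chain; every function name, threshold, and rate in it is an illustrative placeholder of mine, not the lab’s actual protocol.

```python
# Loose schematic of one brain-to-brain transfer step (all names and numbers
# are illustrative placeholders): compress the explorer's ensemble activity
# to a scalar, then map it onto a count of intracortical microstimulation
# (ICMS) pulses for the decoder rat.
import numpy as np

rng = np.random.default_rng(4)
TEMPLATE_RATE = 12.0   # stored mean ensemble rate for one aperture class (made up)
TEMPLATE_SD = 3.0

def encode(ensemble_rates):
    """Z-score the explorer's mean ensemble rate against the stored template."""
    return (ensemble_rates.mean() - TEMPLATE_RATE) / TEMPLATE_SD

def to_pulse_count(z, max_pulses=60):
    """Squash the match score through a sigmoid into an ICMS pulse count."""
    return int(max_pulses / (1 + np.exp(-z)))

def stimulate_decoder(n_pulses):
    """Placeholder for delivering n_pulses of ICMS to the decoder rat's cortex."""
    print(f"delivering {n_pulses} ICMS pulses")

# One trial: the explorer samples an aperture, producing elevated ensemble rates.
explorer_rates = rng.normal(loc=16.0, scale=3.0, size=40)   # 40 recorded neurons
stimulate_decoder(to_pulse_count(encode(explorer_rates)))
```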


References

Mountcastle, Vernon B. "An Organizing Principle for Cerebral Function: The Unit Module and the Distributed System." In The Mindful Brain, edited by Gerald M. Edelman and Vernon B. Mountcastle, 7-50. Cambridge, Mass.: MIT Press, 1978.

Mountcastle, Vernon B. "The Columnar Organization of the Neocortex." Brain 120, no. 4 (1997): 701-22.

Birbaumer, Niels. "Brain-Computer-Interface Research: Coming of Age." Clinical Neurophysiology 117, no. 3 (2006): 479-83.

Blankertz, Benjamin, et al. "The Berlin Brain-Computer Interface: Non-Medical Uses of BCI Technology." Frontiers in Neuroscience 4 (2010).

Bach-y-Rita, Paul, and Stephen W. Kercel. "Sensory Substitution and the Human-Machine Interface." Trends in Cognitive Sciences 7, no. 12 (2003): 541-46.

Horton, Jonathan C., and Daniel L. Adams. "The Cortical Column: A Structure without a Function." Philosophical Transactions of the Royal Society B: Biological Sciences 360, no. 1456 (2005): 837-62.

Jenkins, W. M., et al. "Functional Reorganization of Primary Somatosensory Cortex in Adult Owl Monkeys after Behaviorally Controlled Tactile Stimulation." Journal of Neurophysiology 63 (1990): 82-104.

Maynard, Edwin M., Craig T. Nordhausen, and Richard A. Normann. "The Utah Intracortical Electrode Array: A Recording Structure for Potential Brain-Computer Interfaces." Electroencephalography and Clinical Neurophysiology 102, no. 3 (1997): 228-39.

Nordhausen, Craig T., Edwin M. Maynard, and Richard A. Normann. "Single Unit Recording Capabilities of a 100 Microelectrode Array." Brain Research 726, no. 1-2 (1996): 129-40.

Nicolelis, Miguel A. L., L. A. Baccala, R. C. Lin, and J. K. Chapin. "Sensorimotor Encoding by Synchronous Neural Ensemble Activity at Multiple Levels of the Somatosensory System." Science 268, no. 5215 (1995): 1353-58.

Chapin, John K., Karen A. Moxon, Ronald S. Markowitz, and Miguel A. L. Nicolelis. "Real-Time Control of a Robot Arm Using Simultaneously Recorded Neurons in the Motor Cortex." Nature Neuroscience 2, no. 7 (1999): 664-70.

Nicolelis, Miguel A. L., and John K. Chapin. "Controlling Robots with the Mind." Scientific American, October 2002, 46-53.

Nicolelis, Miguel A. L., Asif A. Ghazanfar, Barbara M. Faggin, Scott Votaw, and Laura M. O. Oliveira. "Reconstructing the Engram: Simultaneous, Multisite, Many Single Neuron Recordings." Neuron 18, no. 4 (1997): 529-37.

Nicolelis, Miguel A. L., and Mikhail A. Lebedev. "Principles of Neural Ensemble Physiology Underlying the Operation of Brain-Machine Interfaces." Nature Reviews Neuroscience 10, no. 7 (2009): 530-40.

Nicolelis, Miguel A. L. Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines--and How It Will Change Our Lives. New York: Times Books, 2011.