Neurofutures: Brain-Machine Interfaces and Collective Minds.
The Affective Turn and the New, New Media.

Tim Lenoir
Duke University

Introduction


The first practical steps toward augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland's work at MIT, Harvard, and the University of Utah, and in the work of the generation of students Sutherland and his colleague David Evans trained at Utah. Having launched the field of interactive computer-aided design with his dissertation project, Sketchpad, Sutherland pursued between 1965 and 1968 an ambitious project to create what he called "the ultimate display," an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted display. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of our body muscles; going further, he noted that while gestural control through hands and arms was an obvious choice, machines to sense and interpret eye motion data could and would be built. "An interesting experiment," he claimed, "will be to make the display presentation depend on where we look." Sutherland's work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the developers of the dataglove and the first virtual reality and telepresence systems at NASA Ames Research Center, as well as Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display, based on eye and gesture tracking, as a quasi "Darth Vader helmet" and integrated virtual cockpit. Furness was trying to solve the problem of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, which had become so complicated that the amount of information a pilot had to assimilate from the cockpit's instruments and command communications was overwhelming. Furness's solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below.

These pathbreaking projects on augmented and virtual reality and on telepresence controlled by gesture- and eye-tracking systems inspired a number of visionary efforts over the next generation to go all the way toward the ultimate display by eliminating the screen and the tethered systems described above altogether and directly interfacing brains and machines. In what follows I will trace lines of synergy and convergence among several areas of neuroscience, genetics, engineering, and computational media that have given rise to brain/computer/machine interfaces that may at first glance seem like the stuff of science fiction or the techno-enthusiast predictions of Singularitarians and Transhumanists, but that may be closer than you think to being realized and quite possibly to transforming human being as we know it in radical ways. I begin with work in brain-machine interfaces currently used in therapeutic neuroprosthetics emanating from the pioneering work on the Utah Intracortical Electrode Array, engage with the visionary speculations of neuroengineers such as Miguel Nicolelis at Duke on their future deployment in ubiquitous computing networks, and contemplate the implications of these prospective developments for reconfigured selves. The second area I will explore is the convergence of work in the cognitive neurosciences on the massive role of affect in decision making with the leveraging of next-generation social media and smart devices as the "brain-machine" interfaces for measuring, data mining, modeling, and mapping affect in strategies to empower individuals to be more efficient, productive, and satisfied members of human collectives. If these speculations have merit, we may want to invest in "neurofutures"—very soon.


Brain-Machine Interfaces

Since the late 1990s the fields of brain science and neuroengineering have produced an astonishing array of discoveries that hold out the prospect of far-reaching medical advances for the treatment of paralysis, limb loss, and a number of neurological impairments by interfacing intact neural structures with artificial neuroprosthetic devices. Among the most successful and justly celebrated sensory neuroprosthetic devices are cochlear and retinal implants that use electrical stimulation to recreate or partially restore perceptual capability. Niels Birbaumer (Tübingen University) and his colleagues have developed brain-machine interfaces using scalp electroencephalography (EEG) signals that address critical clinical problems such as communication in "locked-in" patients and movement restoration in patients with spinal cord lesions and chronic stroke. Recently Brain-Computer Interface (BCI) technology has also been used for non-medical purposes, giving rise to a new generation of measurement devices that allow access to and decoding of macroscopic brain states such as attention, performance capability, and emotion in real time. The signals extracted by BCI techniques are then used to improve and optimize man-machine interaction, enhancing human performance and even developing novel types of skills. Benjamin Blankertz, Michael Tangermann, Klaus-Robert Müller, and their colleagues at the Machine Learning Lab of the TU Berlin have recently extended these devices into interfaces for videogames and other forms of interactive entertainment.
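To give a concrete sense of how such macroscopic brain states are read out of scalp EEG in passive-BCI applications, here is a minimal sketch of one standard approach, a band-power "attention" index. The band limits, the beta/theta ratio, and the function names are illustrative assumptions, not details of the Birbaumer or Berlin systems.

```python
# Hedged sketch (not the Berlin BCI toolchain): estimating a crude "attention"
# index from a single EEG channel by comparing spectral power in the theta and
# beta bands, one simple way macroscopic brain states are read out in
# passive-BCI applications.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average power of the signal between lo and hi Hz (Welch periodogram)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def attention_index(eeg, fs=256):
    """Higher beta/theta ratios are often read as greater task engagement."""
    theta = band_power(eeg, fs, 4, 8)
    beta = band_power(eeg, fs, 13, 30)
    return beta / theta

# Example with synthetic data: 10 s of noise standing in for an EEG channel.
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal(256 * 10)
print(round(attention_index(fake_eeg), 3))
```

Real systems combine many such features across channels and feed them to a classifier trained for each user, but the basic move, turning raw voltage traces into a small number of interpretable state variables, is the same.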

These initial breakthroughs in neuroengineering have won high praise for their contributions to rehabilitation medicine, but they quickly fuel the fantasies of futurologists who imagine not just replacement parts for the neurologically impaired but the augmentation of human abilities through improved memory and analytic capabilities, preparing the ground for a future fusion of artificially intelligent agents with humans in a posthuman singularity. And it is not just the hearts of futurologists and Isaac Asimov science fiction fans that palpitate over brain-machine interface technology. The US Defense Advanced Research Projects Agency (DARPA) is one of the biggest sponsors of BMI research. With its Human Assisted Neural Devices Program (HANDP), funded since 2002, DARPA's stated goal has been, first, to create novel concepts that will improve warfighter performance on the battlefield and, second, to improve prosthetic technology for severely injured veterans.


Neural Ensembles and the Neural Code

This is not the context to go into detail about the history of brain-machine interfaces, but I do want to point to several features of this work that have challenged some canonical assumptions about the brain and opened up new directions for thinking about the future relation of humans and machines in a coming merger of the virtual and the real.

First and foremost is the radical transformation introduced by the concept of neuronal ensemble recording, in which populations of neurons are followed rather than the single neurons of traditional behavioral neuroscience. Up until the late 1980s single-neuron recordings were the mainstay of neuroscience. In large part this approach was dictated by the measuring technology of the day. But during the past 25 years, the introduction of new electrophysiological and imaging methods has allowed neurophysiologists to measure the concurrent activity of progressively larger samples of single neurons in behaving animals. The shift in thinking about multi-electrode recording occurred in parallel with the development of brain-machine interfaces. Instrumentation and new techniques of measurement have also transformed, and are still in the process of rewriting, what we know about brain physiology. Single-neuron recording went hand in hand with the localization theory of brain function: the notion, treated as bedrock of the science by most neurophysiologists, that the cerebral cortex is divided into highly localized visual, auditory, tactile, motor, olfactory, and gustatory centers. These core areas were then subdivided into specialized regions for color, motion detection, face recognition, and other complex functions. Going even further, individual neurons have been labeled as visual neurons, mirror neurons, face neurons, touch neurons, and even "grandmother neurons" (Nicolelis, Beyond Boundaries, 46). Among the most cherished doctrines of this era of brain localization was the notion, based on discoveries by Vernon Mountcastle in 1955, that these highly localized somatosensory regions of the cortex are organized into neat columns. Mountcastle's work appeared to establish that for a common receptive field location (e.g., the cat's foreleg) cells were segregated into domains representing different sensory modalities. Mountcastle hypothesized that there is an elementary unit of organization in the somatic cortex made up of a vertical group of cells extending through all the cellular layers. He termed this unit a "column." By making multiple, closely spaced penetrations with his single-neuron recordings, Mountcastle concluded that individual columns are no more than 500 μm wide and are intermingled in a mosaic-like fashion. These blocks of tissue contain neurons whose salient physiological properties are identical. (Reprised and reviewed in Mountcastle, 1997)

The advent of neural ensemble recording has called the existence of these columns into question and replaced the static architectonic picture of the brain grounded in fixed functional regions by a highly dynamic model of the brain that emphasizes spatiotemporal flows. In place of behaviors being localized to specific brain regions, the new model has a number of radically new features, including the following: 1) the representation of any behavioral parameter is distributed across many brain areas; 2) single neurons are insufficient for encoding a given parameter; 3) individual neurons do not have a one-to-one relationship to a particular motor parameter, but rather, a single neuron is informative of several behavioral parameters—individual neurons multitask; 4) a certain minimal threshold number of neurons in a population is needed for their information capacity to stabilize at a sufficiently high value; 5) the same behavior can be produced by different neuronal assemblies; and finally 6) the primacy of neural plasticity—neural ensemble function is crucially dependent on the capacity to plastically adapt to new behavioral tasks.

This neuronal ensemble perspective has been enabled by a new generation of recording devices in the form of multiple arrayed microelectrodes (up to 400 in some experiments) that can be surgically implanted across several areas of the somatosensory cortex and are capable of simultaneously recording the firing of local populations of neurons in the vicinity of the electrodes. The Utah Intracortical Electrode Array, developed by Maynard, Nordhausen, and Normann in the late 1990s, was the core technology enabling the first generation of brain-machine interfaces. Additional crucial enabling elements have been the development of electronics for sampling, filtering, and amplifying neural signals from the electrodes, and of fast computers and software for extracting meaningful patterns out of the storm of electrical pulses detected by the microarray recording devices. Using sophisticated data-mining techniques and algorithms from artificial neural networks, neuroengineers such as Miguel Nicolelis are able to detect the neural codes for motor commands such as controlled arm and hand motion, grasping, walking, and other sensorimotor actions.
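As a rough illustration of how such decoding works in principle, the following is a minimal sketch of a linear (Wiener-filter-style) decoder that maps binned ensemble firing rates to hand velocity. The bin width, number of lags, and function names are assumptions made for illustration; this is not the Nicolelis lab's actual pipeline.

```python
# Minimal sketch: a linear decoder mapping binned firing rates of a recorded
# neural ensemble to 2-D hand velocity, in the spirit of the Wiener-filter
# decoders used in early BMIs.
import numpy as np

def fit_linear_decoder(rates, velocity, lags=10, ridge=1.0):
    """rates: (T, N) spike counts per 100 ms bin for N neurons.
    velocity: (T, 2) simultaneously recorded hand velocity.
    Returns weights mapping the last `lags` bins of ensemble activity
    to the current velocity."""
    T, N = rates.shape
    # Build a design matrix of lagged population activity plus a bias term.
    X = np.hstack([rates[lags - k - 1:T - k] for k in range(lags)])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    Y = velocity[lags - 1:]
    # Ridge-regularized least squares.
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def decode(rates, W, lags=10):
    """Predict velocity from a new stretch of ensemble activity."""
    T = rates.shape[0]
    X = np.hstack([rates[lags - k - 1:T - k] for k in range(lags)])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return X @ W
```

The essential design choice mirrors the ensemble view described above: no single unit carries the command, but a weighted combination of many simultaneously recorded neurons, refit as the animal (and its cortex) adapts, does.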

These components form the basis of a brain-machine interface. In their now classic experiments Nicolelis, John Chapin, and their team of graduate students and postdocs surgically inserted microwire recording arrays into six areas of the sensorimotor cortex of an owl monkey named Aurora (they have also worked with hooded rats and rhesus monkeys) who had been trained to play a videogame. Aurora operated a joystick that moved a circular cursor across a video screen in pursuit of a target. If she successfully captured the target within a specified time period, she was rewarded with a drink of her favorite juice. Once Aurora had been trained on this task, the neural signals representing the arm, hand, and wrist movements with which she controlled the joystick were captured and converted into digital instructions for operating a robot arm. As Aurora played the game, the robot arm controlling a second joystick would mirror the movements of her game play, gradually improving in accuracy as the experiment went on. Visual feedback allowed Aurora to see that her movements were being copied by the robot arm. After she had played the game in this fashion for several days, Nicolelis took away Aurora's joystick and attached the cursor control to the wrist of the robot. Somewhat befuddled, Aurora sat for a while, and then after a few minutes began moving her arm as if the phantom joystick were there, while the robot arm completed the task and got Aurora her juice reward. Even more remarkably, after several experiments of this sort Aurora realized that she didn't need to move her arm at all: simply by imagining the movements she would make to capture the target, she could get the robot to do the trick for her. There have been a number of variations on these experiments, including having the robot arm located at MIT but visible via a television screen to Aurora back at Duke. This arrangement worked as long as the lag time did not exceed 250-300 milliseconds. Another spectacular demonstration of the brain-machine interface involved a rhesus monkey walking on a treadmill. In a similar fashion, the real-time capture of the monkey's brain signals controlling her gait on the treadmill was converted into a program operating the legs of a robot in Kyoto, visible to her on a video monitor. The monkey was rewarded for learning that her gait on the treadmill controlled the gait of the robot, which sped up, slowed down, and stopped based on her own movements. After the monkey had performed this task for an hour, the treadmill was turned off, but she quickly realized that by imagining her own leg movements she could control the Kyoto robot and receive her juice reward.

An interesting feature of these experiments was that as the animal shifted between normal and brain control mode (without moving its arms or legs) a subset of the recorded cortical neurons ceased to fire. Perhaps more surprisingly, a fraction of the recorded cortical neurons showed clear velocity and direction-tuning that was related to the movements of the robotic prosthesis but not to the displacement of the animal’s own arms. Such tuning developed and became sharper during the period in which monkeys learned to operate the BMI without execution of overt body movements (brain control mode). As animals shifted back and forth between using their own limbs or the artificial actuator controlled by the BMI to solve a particular motor task, functional coupling between pairs of cortical neurons adapted dynamically.

Nicolelis draws the important conclusion from this "that, at its limit, cortical plasticity may allow artificial tools to be incorporated as part of the multiple functional representations of the body that exist in the mammalian brain. If this proves to be true, we would predict that continuous use of a BMI should induce subjects to perceive artificial prosthetic devices, such as prosthetic arms and legs, controlled by a BMI as part of their own bodies. Such a prediction opens the intriguing possibility that the representation of self does not necessarily end at the limit of the body surface, but can be extended to incorporate artificial tools under the control of the subject's brain. BMI research further stretches this puzzling idea by demonstrating that, once brain activity is recorded and decoded efficiently in real time, its capacity to control artificial devices can undergo considerable modification in terms of temporal, spatial, kinematic and kinetic characteristics, termed scaling. In other words, not only can a BMI enact voluntary motor outputs faster than the subject's biological apparatus (temporal scaling), but it can also accomplish motor tasks at a distance from the subject's own body (spatial scaling), by controlling an actuator that is either considerably smaller (for example, a nano-tool) or considerably larger (for example, a crane) than the subject's own biological appendices." (Nicolelis, 2009, 535-536)


Sharing Brain States

In a follow-on set of experiments the Nicolelis lab has experimented with transferring the brain state of one animal—in this case a hooded rat—to another rat through a direct brain-to-brain interface. In the experiment one rat is the "explorer," trained to use its facial whiskers to determine the diameter of an aperture in the dark; the goal is to find the aperture that is the right size to let the rat through to get a reward. The explorer rats trained on this task in the Nicolelis experiment were successful more than 90 percent of the time in selecting the correct aperture and getting the reward within 150 milliseconds. In the next phase of the experiment a second rat that has also been trained in the tactile discrimination task is placed in a separate box, but it is not allowed to use its own whiskers to determine the width of the aperture and get the reward. Instead, the explorer rat's brain activity is transmitted wirelessly to the second (decoder) rat. The decoder rat pokes its head into one of two spots on the wall to indicate which aperture to select for the reward, but it cannot use its own exquisitely sensitive whiskers to make the choice; it must select on the basis of the stimulus pattern it receives from the explorer rat. If the decoder rat selects the correct aperture, it is rewarded, and the explorer rat is given an extra bonus reward for successfully transmitting its perceptual experience to its decoder partner. The idea here is that the decoder rat cooperates virtually with the explorer rat and in effect expands its own body image to incorporate the whiskers of the explorer rat as if they were its own. More complicated versions of this experiment are also being attempted, including a brain interface involving an intermediary layer of rats, in which rats trained in exploring different aspects of an environment or object are allowed to share their perceptions and form a consensus.

Optogenetic Mapping: Neurotechnology Renaissance

The techniques for recording neural ensembles developed by Nicolelis and discussed above are effective in decoding sensorimotor movements, and there are numerous medical applications for assisting paralyzed patients that can implement these methods. But they are not fine-grained enough to map out the individual circuits involving thousands of neurons that encode a specific brain function, particularly higher cognitive functions. Problems of a similar nature are obstacles in the use of fMRI imaging—since fMRI relies on blood flow and oxygenation to particular areas of the brain, the results suffer from temporal lag—and of EEG (electroencephalogram) methods. Recently a new and highly successful approach, called optogenetic mapping, has been introduced. Developed by Karl Deisseroth and Ed Boyden in 2006, this method uses a light stimulus to modulate the electrical activity of populations of cortical neurons. Through a piece of genetic engineering, cortical neurons can be made to express Channelrhodopsin-2 (ChR2). Blue light from a laser opens ChR2's cation channel, triggering a massive influx of positively charged ions, largely sodium, into the neuron and making it fire an action potential. Conversely, Boyden and his team discovered that a neuron engineered to express Halorhodopsin, another light-activated protein, stops firing when exposed to yellow light. Here was a pair of on-off switches that were extremely precise and could be operated in a highly controlled manner within a volume of tissue of about one cubic millimeter, simply by injecting a small amount of the virus used for the transfection. By stimulating those cells with a laser, the researchers could control the activity of specific nerve circuits with millisecond precision and study the effects. They later found that also inserting the gene for Green Fluorescent Protein (GFP) served to mark which neurons were expressing Channelrhodopsin-2. By using different promoters, different cell types could be selected and studied. And by switching the blue and yellow laser light, passed to the tissue through microfiber optic cables, on and off, it could be determined which functional groups of cells are involved in a bodily action. These new methods of using light to activate or silence specific neurons in the brain are now being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. According to Ed Boyden, "We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate." (Boyden, Brain Coprocessor) For Boyden and other neuroengineers the new tools for imaging and mapping brain circuits, such as those provided by optogenetics, two-photon microscopy, diffusion tensor imaging, and computational tractography, are beginning to reveal principles governing how best to control a circuit—revealing the neural targets and control strategies that most efficaciously lead to a goal brain state or behavioral effect—and thus pointing the way to new therapeutic strategies and ultimately to the development of implantable neuromorphic chips capable of intervening therapeutically in conditions such as epilepsy or Parkinson's disease.
Miniature, implantable brain coprocessors, Boyden argues, might be able to support new kinds of personalized medicine, for example by continuously adapting a neural control strategy to the goals, state, environment, and history of an individual patient; and in the not-too-distant future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making.
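The on-off logic of the optogenetic switch described above can be conveyed with a toy simulation. The sketch below drives a leaky integrate-and-fire neuron with simulated "blue" (ChR2, depolarizing) and "yellow" (halorhodopsin, hyperpolarizing) light; every parameter is invented for illustration and is not drawn from any biophysical model or from the published methods.

```python
# Toy sketch (illustrative only): a leaky integrate-and-fire neuron whose
# input current is gated by simulated blue light (ChR2, depolarizing) and
# yellow light (halorhodopsin, hyperpolarizing), mimicking the optogenetic
# on/off switch described above.
import numpy as np

def simulate(blue, yellow, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """blue, yellow: arrays of 0/1 light pulses per time step (ms).
    Returns the membrane potential trace and the spike times."""
    v, trace, spikes = 0.0, [], []
    for t, (b, y) in enumerate(zip(blue, yellow)):
        i_in = 0.08 * b - 0.08 * y           # photocurrents of opposite sign
        v += dt * (-v / tau + i_in)          # leaky integration
        if v >= v_thresh:                    # fire and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Blue light for the first 200 ms drives spiking; yellow light afterwards
# silences the cell even though the blue light stays on.
blue = np.ones(400)
yellow = np.concatenate([np.zeros(200), np.ones(200)])
_, spike_times = simulate(blue, yellow)
print("spikes during blue-only epoch:", sum(t < 200 for t in spike_times))
print("spikes during blue+yellow epoch:", sum(t >= 200 for t in spike_times))
```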

Let me summarize the developments in brain-machine interfaces relevant to our interrogation of constructions of the future. First, there have been some important changes in how we understand the brain. Foremost is the emphasis on brain and neural plasticity. One of the key points in the discussion above is the ability of the brain to reshape the body schema to include new prosthetic devices, such as robotic arms and legs operating over the internet, as parts of the body. An astonishing feature of the Nicolelis experiments discussed above, for instance, is that as Aurora adjusts to operating the brain-machine interface by thought alone, no longer using her natural arm movements to operate the joystick, the neural firings in her brain adapt and optimize around controlling the robot arm. The ease and rapidity with which this happens is remarkable. A second feature I have wanted to emphasize is that the BMIs presented here make it imaginable for two or more animals in the loop to share brain states as part of a collective, cooperative mind. The imagination runs wild in thinking about the possible scenarios to which this might lead in an internet-enabled ubiquitous computing environment. The final point is that with the new experimental techniques of optogenetics and new imaging modalities such as two-photon laser scanning microscopy, researchers are beginning to be able to map out the detailed circuitry not just of sensorimotor functions but soon even of the higher-order cognitive functions central to mental activity. An example is the work of the David Tank Lab at Princeton on mapping the circuitry of the hippocampus in order to understand the dynamics of short-term memory. The ability to intervene within, control, and possibly modify the functioning of specific neural circuits is just over the horizon. According to Edward Boyden (MIT), David Tank (Princeton), Karl Deisseroth (Stanford), and other neuroengineers, the era of brain coprocessors is within reach (for outstanding coverage of these rapidly unfolding developments see the BrainWindows blog: http://brainwindows.wordpress.com/about/).


The discussion thus far has centered on brain-machine interfaces and imagined future brain coprocessors as therapeutic, rehabilitative tools and as devices for brain reading, mind control, and the augmentation of human mental abilities through fairly invasive surgical means. But some of the features of these imagined brain coprocessors may already be quietly being installed by means that require no surgery at all. In the next sections I want to explore developments in progress in the fields of ubiquitous computing, social media, and marketing that are, for all practical purposes, the neurotechnologies of the future.


Ubiquitous Computing and Augmented Reality

The infrastructure of ubiquitous computing envisioned two decades ago by Mark Weiser and John Seely Brown, a world in which computation would disappear from the desktop and merge with the objects and surfaces of our ambient environment (Greenfield: 2006), offers the nutrient matrix for the posthuman extended minds proposed by Andy Clark and for the collective paraselves fantasized in Nicolelis's brain-machine interfaces and theorized beautifully in Brian Rotman's discussion of paraselves. (Weiser: 1991, 1994; Weiser and Brown: 1996; Clark, 2004; Clark, 2010; Rotman, Becoming Beside Ourselves, 2008) Rather than people taking their work to a desktop computer, many tiny computing devices would be spread throughout the environment, in computationally enhanced walls, floors, pens, and desks seamlessly integrated into everyday life. We are still far from realizing Weiser's vision of computing for the twenty-first century. Although nearly every piece of technology we use has one or more processors in it, we have yet to reach the transition point to ubiquitous computing at which the majority of those processors are networked and addressable. But we are getting there, and there have already been a number of milestones along the road. Inspired by efforts from 1989 to 1995 at Olivetti and Xerox PARC to develop invisible interfaces interlinking coworkers with electronic badges and early RFID tags (Want, 1992, 1995, 1999), the Hewlett-Packard Cooltown project (2000-2005) offered a prototype architecture for linking everyday physical objects to Web pages by tagging them with infrared beacons, RFID tags, and bar codes. Users carrying PDAs, tablets, and other mobile devices could read those tags to view Web pages about the world around them and engage services such as printers, radios, automatic call forwarding, and continually updated maps for finding like-minded colleagues in settings such as conferences. (Barton, 2001; Kindberg, 2002)

While systematically constructed ubiquitous cities based on the Cooltown model have yet to take hold, many of the enabling features of ubiquitous computing environments are arising in ad hoc fashion, fueled primarily by growing worldwide mass consumption of social networking applications and of the wildly popular new generation of smartphones with advanced computing capabilities, cameras, accelerometers, and a variety of readers and sensors. In response to this trend, and building on a decade of Japanese experience with Quick Response (QR) barcodes, in December 2009 Google dispatched approximately 200,000 barcode stickers for the windows of its "Favorite Places" in the US, so that people could use their smartphones to find out about them. Beyond such consumer-oriented uses, companies like Wal-Mart and other global retailers now routinely use RFID tags to manage industrial supply chains, and such practices are now indispensable in hospital and other medical environments. These examples are the tip of the iceberg of increasingly pervasive computing applications for the masses. Consumer demand for electronically mediated, pervasive "brand zones" such as Apple Stores, Prada Epicenters, and the interior of your BMW, where movement, symbols, sound, and smell all reinforce the brand message and turn shopping spaces and driving experiences into engineered synesthetic environments, is a powerful aphrodisiac for pervasive computing.

Even these pathbreaking developments fall short of Weiser's vision, which was to engage multiple computational devices and systems simultaneously during ordinary activities, without having to interact with a computer through mouse, keyboard, and desktop monitor, and without necessarily being aware of doing so. In the years since these first experimental systems, rapid advances have taken place in mobile computing, including: new smart materials capable of supporting small, lightweight, wearable mobile cameras and communications devices; many varieties of sensor technologies; RFID tags; physical storage on "motes" or "mu-chips," such as HP's Memory Spot system, which permits storage of large media files on tiny chips instantly accessible by a PDA (McDonnell, 2010); Bluetooth; numerous sorts of GIS applications for location logging (e.g., Sony's PlaceEngine and LifeTagging systems); and wearable biometric sensors (e.g., BodyMedia, SenseWear). To realize Weiser's vision, though, we must go beyond these sorts of breakthroughs by getting the attention-grabbing gadgets, smartphones, and tablets out of our hands and beginning to interact within computer-mediated environments the way we normally do with other persons and things. Here, too, recent advances have been enormous, particularly in gesture and voice recognition technologies coupled with new forms of tangible interface and information display. (Rekimoto, 2008)

Two prominent examples are the stunning gesture recognition capabilities of the Microsoft Kinect system for the Xbox, which dispenses with a game controller altogether in favor of gesture recognition as the game interface, and the EPOC headset brain-controller system from Emotiv Systems. But for our purposes in exploring some of the current routes to neuromarketing and the emergence of a brain coprocessor, the SixthSense prototype developed by Pranav Mistry and Pattie Maes at MIT points even more dramatically to the untethered fusion of the virtual and the real central to Weiser's vision. (Mistry, 2009a; 2009b; 2009c) The SixthSense prototype comprises a pocket projector, a mirror, and a camera built into a small wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The camera recognizes objects instantly, and the micro-projector overlays information on any surface, including the object itself or the user's hand. The user can then access or manipulate the information with his or her fingers. The movements and arrangements of markers on the user's hands and fingers are interpreted as gestures that activate instructions for a wide variety of applications projected as interfaces: search, video, social networking, basically the entire Web. SixthSense also supports multi-touch and multi-user interaction.
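The first computational step in a SixthSense-style interface, locating the colored fingertip markers in each camera frame so that their trajectories can be read as gestures, can be sketched as follows. The color ranges and function names are assumptions, and this is not the SixthSense source code.

```python
# Illustrative sketch only: tracking colored fingertip markers in a camera
# frame with OpenCV, the basic step behind interpreting marker positions as
# gestures in a SixthSense-style system.
import cv2
import numpy as np

# Rough HSV ranges for colored caps worn on the fingertips (assumed values).
MARKER_RANGES = {
    "red":    ((0, 120, 120), (10, 255, 255)),
    "blue":   ((100, 120, 120), (130, 255, 255)),
    "green":  ((45, 120, 120), (75, 255, 255)),
    "yellow": ((20, 120, 120), (35, 255, 255)),
}

def find_markers(frame_bgr):
    """Return a dict mapping marker color to its (x, y) pixel centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    positions = {}
    for color, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            positions[color] = (int(m["m10"] / m["m00"]),
                                int(m["m01"] / m["m00"]))
    return positions

# A downstream gesture layer would then map marker trajectories to commands,
# for example thumbs and index fingers framing a rectangle to take a photo.
```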

Insert Figure 1: a, b, c about here.
Figure 1: Pranav Mistry and Pattie Maes, SixthSense. The system comprises a pocket projector, a mirror and a camera built into a wearable device connected to a mobile computing platform in the user’s pocket. The camera recognizes objects instantly, with the micro-projector overlaying the information on any surface, including the object itself or the user’s hand. (Photos courtesy of Pranav Mistry)




a. Active phone keyboard overlaid on the user's hand. b. Camera recognizes a flight coupon and projects a departure update on the ticket. c. Camera recognizes a news story from the web and streams video to the page.


Thus far we have emphasized technologies that are enabling the rise of pervasive computing, but 'ubiquitous computing' not only denotes a technical thrust; it is equally a socio-cultural formation, an imaginary, and a source of desire. From our perspective its power becomes transformative in permeating the affective domain, the machinic unconscious. Perhaps the most significant developments driving this reconfiguration of affect are the phenomena of social networking and the use of smartphones. More people are not only spending more time online; they are seeking to do it together with other wired "friends." Surveys by the Pew Internet & American Life Project report that between 2005 and 2008 the use of social networking sites by online American adults 18 and older quadrupled from 8% to 46%, and that 65% of teens aged 12-17 used social networking sites like Facebook, MySpace, or LinkedIn. The Nielsen Company reports that 22% of all time spent online is devoted to social network sites. (NielsenWire, June 15) Moreover, the new internet generation wants to connect in order to share: the Pew Internet & American Life Project has found that 64% of online teens ages 12-17 have participated in a wide range of content-creating and sharing activities on the internet; 39% of online teens share their own artistic creations online, such as artwork, photos, stories, or videos, while 26% remix content they find online into their own creations. (Lenhart, 2010, "Social Media") The desire to share is not limited to text and video but is extending to data-sharing of all sorts. Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed. On MedHelp, one of the largest Internet forums for health information, more than 30,000 new personal tracking projects are started by users every month. Foursquare, a geo-tracking application with about one million users, keeps a running tally of how many times players "check in" at every locale, automatically building a detailed diary of movements and habits; many users publish these data widely. (Wolf, 2010) Indeed, 60% of internet users are not concerned about the amount of information available about them online; 61% of online adults take no steps to limit that information, and just 38% say they have done so. (Madden: 2007, 4) As Kevin Kelly points out, we are witnessing a feedback loop between new technologies and the creation of desire. The explosive development of mobile, wireless communications, the widespread use of RFID tags, Bluetooth, embedded sensors, and QR addressing, applications like Shazam for snatching a link and downloading music from your ambient environment, GIS applications of all sorts, and social phones such as the many types of Android phones and the iPhone 4 that emphasize social networking are creating a desire for open sharing, collaboration, even communalism, and above all a new kind of mind. (Kelly: 2009a, 2009b) [See Figure 2: a, b, c]


Insert Figure 2: a, b, c about here.
Figure 2: 3D mapping, location-aware applications, and augmented reality browsers.
(Photos courtesy of Earthmine.com and Layar.com)





a. Earthmine attaches location aware apps (in this case streaming video) to specific real-world locations. b. Earthmine enables 3D objects to be overlaid on specific locations. c. Layar augmented reality browser overlays information, graphics, and animation on specific locations.



The Affective Turn, Emotional Branding and Neuromarketing

A number of critical theorists, including Deleuze and Guattari, Brian Massumi, Bernard Stiegler, Patricia Clough, and more recently Hardt and Negri, have observed that under globalization capitalism has shifted its focus from production and consumption to the economic circulation of pre-individual bodily capacities, or affects, in the domain of biopolitical control. At the same moment that these scholars were urging us to pay heed to the affective turn and the radicalizing shift taking place in capitalism, marketing theorists and "mad men" were becoming sensitized to the same shift. At the end of the 1990s major marketing gurus such as Marc Gobé pointed out to their colleagues that the world is clearly moving from an industrially driven economy toward a people-driven economy that puts the consumer in the seat of power; and, as we are all becoming painfully aware, over the past fifty years the economic base has shifted from production to consumption. Marketers have embraced the challenges of this new reality with new strategies. Gobé pointed out that what used to be straightforward functional ideas, such as computers, have morphed from "technology equipment" into larger, consumer-focused concepts such as "lifestyle entertainment." Food is no longer about cooking or chores but about home and lifestyle design and "sensory experiences." Even before neuroscience entered the marketing scene, Gobé and his colleagues were coming to terms with what the theorists named above have called the immaterial labor of affect. Gobé called the visionary approach he was offering to address the new capitalist realities "Emotional Branding," but what he had in mind was more than simple emotion and closer to what we call affect. "By emotional," Gobé meant, "how a brand engages consumers on the level of the senses and emotions; how a brand comes to life for people and forges a deeper, lasting connection... It focuses on the most compelling aspect of the human character; the desire to transcend material satisfaction, and experience emotional fulfillment. A brand is uniquely situated to achieve this because it can tap into the aspirational drives which underlie human motivation."

Nearly every academic discipline, from art history and visual studies to critical theory and recently even the bastions of economics, has been moved in one way or another to get on the affective bandwagon. Recent neuroscience points to an entirely new set of constructs underlying economic decision making. The standard economic theory of constrained utility maximization is most naturally interpreted either as the result of learning based on consumption experiences or as the product of careful deliberation, a balancing of the costs and benefits of different options, as might characterize complex decisions like planning for retirement, buying a house, or hammering out a contract. While not denying that deliberation is part of human decision making, neuroscience points to two generic inadequacies of this approach: its failure to account for the crucial role of automatic processing and its failure to account for the crucial role of emotional processing.

A body of empirical research spanning the past fifteen years, too large to discuss here, has documented the range and extent of complex psychological functions that can transpire automatically, triggered by environmental events and without an intervening act of conscious will or subsequent conscious guidance. (Bargh, 1999; 2000; Hassin, 2005) First, much of the brain implements "automatic" processes, which are faster than conscious deliberations and which occur with little or no awareness or feeling of effort (John Bargh et al. 1996; Bargh and Tanya Chartrand 1999). Because people have little or no introspective access to these processes, or volitional control over them, and because these processes evolved to solve problems of evolutionary importance rather than to respect logical dicta, the behavior they generate need not follow normative axioms of inference and choice. Second, our behavior is strongly influenced by finely tuned affective (emotion) systems whose basic design is common to humans and many animals (Joseph LeDoux 1996; Jaak Panksepp 1998; Edmund Rolls 1999). These systems are essential for daily functioning, and when they are damaged or perturbed by brain injury, stress, imbalances in neurotransmitters, or the "heat of the moment," the logical-deliberative system, even if completely intact, cannot regulate behavior appropriately. Human behavior thus requires a fluid interaction between controlled and automatic processes, and between cognitive and affective systems. A number of studies by Damasio and his colleagues have shown that deliberative action cannot take place in the absence of affective systems. Yet many behaviors that emerge from this interplay are routinely and falsely interpreted as the product of cognitive deliberation alone (George Wolford, Michael Miller, and Michael Gazzaniga 2000). These results suggest that introspective accounts of the basis for choice should be taken with a grain of salt. Because automatic processes are designed to keep behavior "off-line" and below consciousness, we have far more introspective access to controlled than to automatic processes. Since we see only the tip of the automatic iceberg, we naturally tend to exaggerate the importance of control. Taking these findings on board, a growing vanguard of "neuroeconomists" argues that economic theory ought to take the findings of neuroscience and neuromarketing seriously. (Perrachione and Perrachione, "Brains and Brands" 2008)

But even in advance of engineering solutions for building neurochips and neuro-coprocessors, a burgeoning "adfotainment-industrial complex" is emerging that marries an applied science of affect with media and brand analysis. Among the most successful entrants in this field is MindSign Neuromarketing, a San Diego firm that engages media and game companies to fine-tune their products through the company's techniques of "neurocinema": the real-time monitoring of the brain's reaction to movies, using fMRI, eye tracking, galvanic skin response, and other scanning techniques to monitor the amygdala while test subjects watch a movie or play a game. MindSign examines subjects' brain responses "to your ad, game, speech, or film. We look at how well and how often it engages the areas for attention/emotion/memory/and personal meaning (importance)." MindSign cofounder Philip Carlsen said in an NPR interview that he foresees a future in which directors send their dailies (raw footage fresh from the set) to the MRI lab for optimization. "You can actually make your movie more activating," he said, "based on subjects' brains. We can show you how your product is affecting the consumer brain even before the consumer is able to say anything about it." The leaders in this adfotainment-industrial complex are not building on pseudoscience but have close connections to major neuroscience labs and employ some of the leading researchers of the neuroscience of affect on their teams. NeuroFocus, located in Berkeley, California, was founded by the UC Berkeley-trained engineer Dr. A. K. Pradeep, and its team of scientists includes Robert T. Knight, the director of the Helen Wills Neuroscience Institute at UC Berkeley. NeuroFocus was recently acquired by the powerful Nielsen Company.

I want to consider the convergence of these powerful tools of neuro-analysis and media in light of what some theorists take to be the potential of our increasing symbiosis with media technology for reconfiguring the human. Our new collective minds are deeply rooted in an emerging corporeal axiomatic, the domain identified by Felix Guattari as the machinic unconscious and elaborated by Patricia Clough as a "teletechnological machinic unconscious" (Clough, Autoaffection, 2000): a wide range of media ecologies, material practices, and social apparatuses for encoding and enforcing ways of behaving through routines, patterns of movement and gestures, as well as haptic and even neurological patterning and re-patterning that facilitate specific behaviors and modes of action. (Guattari, 2009) In this model technological media are conjoined with unconscious and preconscious cognitive activity to constitute subjects in particular, medium-specific directions.

The affective domain is being reshaped by electronic media. Core elements of the domain of affect are unconscious social signals, consisting primarily of body language, facial expressions, and tone of voice. These social signals are not just a complement to conscious language; they form a separate communication network that influences behavior and can provide a window into our intentions, goals, and values. Much contemporary research in cognitive science and other areas of social psychology is reaffirming that humans are intensely social animals and that our behavior is much more a function of our social networks than anyone had previously imagined. The social circuits formed by the back-and-forth pattern of unconscious signaling between people shape much of our behavior in families, work groups, and larger organizations. (Pentland, 2007, "Collective Nature of Human Intelligence") By paying careful attention to the patterns of signaling within a social network, Pentland and others are demonstrating that it is possible to harvest tacit knowledge that is spread across the network's individuals. While our hominid ancestors communicated face-to-face through voice, face, and hand gestures, our communications today are increasingly electronically mediated, our social groups dispersed and distributed. But this does not mean that affect has disappeared or somehow been stripped away. On the contrary, as the "glue" of social life, affect is present in the electronic social signals that link us together. The domain of affect is embedded within and deeply intertwined with these pervasive computing networks. The question is: as we become more socially interlinked than ever through electronic media, can the domain of affect be accessed, measured, perhaps understood, and possibly manipulated, for better or worse?

A number of researchers are developing systems to access, record, and map the domain of affect, among them Sony Interaction Laboratory director Jun Rekimoto, whose suite of applications includes the Affect Phone and a LifeLogging system coupled with augmented reality (Rekimoto, 2006; 2007a; 2007b; 2010), and Pattie Maes's group at MIT, which has developed a multiperson awareness medium for connecting distant friends and family. For the past five years Sandy Pentland and his students at the MIT Media Lab have been working on what they call a socioscope for accessing the affective domain in order to make new socially networked media smarter by analyzing prosody, gesture, and social context. The socioscope consists of three main parts: "smart" phones programmed to keep track of their owners' locations and their proximity to other people by sensing cell tower and Bluetooth IDs; electronic badges that record the wearers' locations, ambient audio, and upper body movement via a two-dimensional accelerometer; and a microphone and body-worn camera that record the wearer's context, together with software used to extract audio "signals": specifically, the exact timing of individuals' vocalizations and the amount of modulation, in both pitch and amplitude, of those vocalizations. Unlike most speech or gesture research, the goal is to measure and classify speaker interaction rather than to puzzle out the speakers' meanings or intentions.
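The audio side of such a socioscope can be illustrated with a short sketch that segments speech from silence and summarizes pitch and amplitude modulation. The frame sizes, thresholds, and function names are illustrative assumptions, not the Media Lab's implementation.

```python
# Hedged sketch of "honest signals" feature extraction: given a mono audio
# array, estimate when someone is vocalizing and how much their pitch and
# loudness vary; values are heuristics chosen for illustration.
import numpy as np

def count_runs(mask):
    """Number of contiguous voiced segments (utterances)."""
    return int(np.sum(np.diff(np.concatenate(([0], mask.astype(int)))) == 1))

def speaking_features(audio, sr=16000, frame_ms=32, energy_thresh=0.02):
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    frames = audio[:n * frame].reshape(n, frame)

    # 1. Speaking/non-speaking segmentation from short-time energy.
    rms = np.sqrt((frames ** 2).mean(axis=1))
    voiced = rms > energy_thresh

    # 2. Crude pitch estimate per voiced frame via autocorrelation.
    pitches = []
    for f, v in zip(frames, voiced):
        if not v:
            continue
        ac = np.correlate(f, f, mode="full")[frame - 1:]
        lag_lo, lag_hi = sr // 400, sr // 75      # search roughly 75-400 Hz
        lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
        pitches.append(sr / lag)

    total_voiced_s = voiced.sum() * frame / sr
    return {
        "fraction_speaking": float(voiced.mean()),
        "mean_utterance_len_s": total_voiced_s / max(1, count_runs(voiced)),
        "pitch_variation": float(np.std(pitches)) if pitches else 0.0,
        "amplitude_variation": float(np.std(rms[voiced])) if voiced.any() else 0.0,
    }
```

Note that nothing here transcribes or interprets what is said; as in Pentland's description, only the timing and modulation of vocalization are summarized.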

One application of this technology is the Serendipity system, which runs on Bluetooth-enabled mobile phones and is built on BlueAware, an application that scans for other Bluetooth devices in the user's proximity. (Eagle, 2005) When Serendipity discovers a new device nearby, it automatically sends a message to a social gateway server with the discovered device's ID. If the server finds a match between the two users' profiles, it sends a customized picture message to each user, introducing them to one another. The phone extracts the social signaling features as a background process so that it can give the user feedback about how he or she sounded and build a profile of the user's interactions with the other person. The power of this system is that it can be used to create, verify, and better characterize relationships in online social network systems such as Facebook, MySpace, and LinkedIn. A commercial application of this technology is Citysense, which acquires millions of data points to analyze aggregate human behavior and to develop a live map of city activity; it learns where each user likes to spend time and processes the movements of other users with similar patterns. Citysense displays not only "where is everyone right now" on the user's PDA but "where is everyone like me right now." (Sense Networks, 2008)
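Schematically, the Serendipity match-making loop might look like the following sketch, in which a phone's proximity report is scored against stored profiles on the gateway server. The profile format, similarity score, and threshold are my assumptions, not the published MIT implementation.

```python
# Schematic sketch of Serendipity-style matchmaking: a phone reports the
# Bluetooth IDs it sees; the gateway server checks whether the two owners'
# profiles overlap enough to send each an introduction.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    interests: set = field(default_factory=set)
    contacts: set = field(default_factory=set)   # people they already know

PROFILES = {}          # device_id -> Profile, maintained by the gateway server

def similarity(a: Profile, b: Profile) -> float:
    """Jaccard overlap of declared interests plus a bonus for shared contacts."""
    if not a.interests or not b.interests:
        return 0.0
    jaccard = len(a.interests & b.interests) / len(a.interests | b.interests)
    mutual = len(a.contacts & b.contacts)
    return jaccard + 0.1 * mutual

def on_proximity_report(reporter_device, seen_device, threshold=0.3):
    """Called when a phone reports a newly discovered Bluetooth device nearby."""
    a, b = PROFILES.get(reporter_device), PROFILES.get(seen_device)
    if a is None or b is None or b.user_id in a.contacts:
        return None
    if similarity(a, b) >= threshold:
        # In the real system each phone would receive a picture message here.
        return (f"Introduce {a.user_id} and {b.user_id}: "
                f"shared interests {sorted(a.interests & b.interests)}")
    return None
```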

There are a number of implications of this technology for quantifying the machinic unconscious of social signals. Enabling machines to know social context will enhance many forms of socially aware communication; indeed, the idea is to overcome some of the major drawbacks in our current use of computationally mediated forms of communication. For example, having a quantifiable model of social context will permit the mapping of group structures and information flows, the identification of enabling nodes and bottlenecks, and the provision of feedback on group interactions: Did you sound forceful during a negotiation? Did you sound interested when you were talking to your spouse? Did you sound like a good team member during the teleconference?


I want to close these reflections by pointing to two newly introduced technologies that build upon some of the same data-mining techniques for creating profiles discussed in connection with Pentland's Citysense. Of the two, the less invasive technology I want to highlight is Streetline, a company that realizes many of the innovations first experimented with in Cooltown and incorporates the low-power mesh technologies first developed in the motes project at Berkeley in the late 1990s. Streetline, a San Francisco-based tech firm, was selected as the winner of the IBM Global Entrepreneurship Program's SmartCamp 2010 for developing the free Parker app, which not only shows you where parking meters are located but also shows you which spaces are available. Forget circling a five-block radius waiting for a spot to appear: with this app (available for iPhone and Android) you can pinpoint and snag that elusive space. Streetline captures data using self-powered motes, sensors mounted in the ground at each parking space, which can detect whether or not a space is vacant. The Parker app uses your smartphone's location sensors to know where you are and to highlight local parking spots. It also uses a large screen (in your car, for instance) to display a dynamic map of the nearest spots rather than just a list of street addresses. The parking data from the sensors are transmitted across ultra-low-power mesh networks to Streetline servers, which build a real-time picture of which parking spaces are vacant. This information can be shared with drivers through the Parker app, and also with city officials, operators, and policy managers. The app goes even further: once you park, it provides walking directions back to your vehicle, records how much time you have on the meter, and alerts you when time is getting short. This is a truly cool app.
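The server-side query behind such an app can be sketched simply: given the phone's position and a live table of sensor reports, return the nearest vacant spaces. The data layout and function names below are hypothetical, not Streetline's API.

```python
# Hedged sketch of a Parker-style lookup over sensor data gathered from a
# mesh network: find the k closest vacant metered spaces to the phone.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_vacant(spots, lat, lon, k=5):
    """spots: {spot_id: {"lat": .., "lon": .., "vacant": bool}} built from
    mesh-network sensor reports. Returns the k closest vacant spot ids."""
    vacant = [(haversine_m(lat, lon, s["lat"], s["lon"]), sid)
              for sid, s in spots.items() if s["vacant"]]
    return [sid for _, sid in sorted(vacant)[:k]]

# Example: two of three sensors report a vacant space.
spots = {
    "meter-101": {"lat": 37.7901, "lon": -122.4010, "vacant": True},
    "meter-102": {"lat": 37.7905, "lon": -122.4022, "vacant": False},
    "meter-103": {"lat": 37.7912, "lon": -122.4031, "vacant": True},
}
print(nearest_vacant(spots, 37.7903, -122.4015))
```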
But this app sits on a spectrum of technologies that use cell-phone data to track and trace your location. A more disturbing surveillance use of new media technology combined with data-mining and profiling tools is Immersive Labs of New York, which uses webcams embedded in billboards and display systems in public areas, such as Times Square, an airport, or a theme park, to grab footage of passers-by for facial recognition tools that measure the impact of an ad running on the screen. In this application artificial intelligence software makes existing digital signs smarter, sequences ads, and pushes media to the persons in front of the screen. Immersive Labs software makes real-time decisions about which ads to display based on the current weather and the gender, age, crowd size, and attention time of the audience. The technology can adapt to multiple environments and ads on a single screen and works with both individuals and large groups. Using a standard webcam connected to any existing digital screen, the system determines age, gender, and attention time and automatically schedules targeted advertising content. The software calculates the probability of success for each advertisement and makes real-time decisions about which ad should play next. The analytics report on ad performance and demographics (e.g., gender, age, distance, attention time, dwell time, gazes). The company claims that it does not store the images of the individuals it has analyzed but immediately discards them after the interaction; we're not so sure.
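The scheduling logic described here, estimating a probability of success for each ad from audience attributes and playing the best candidate, might be sketched as follows. The ad inventory, feature weights, and scoring below are invented for illustration and are not Immersive Labs' software.

```python
# Hedged sketch of real-time ad scheduling driven by webcam audience analysis:
# score each ad against the current audience and play the highest scorer.
from dataclasses import dataclass

@dataclass
class Audience:
    gender: str          # e.g. "female", "male", "mixed", from face analysis
    age_band: str        # e.g. "18-24", "25-34", ...
    crowd_size: int
    attention_s: float   # average attention time measured so far
    weather: str         # pulled from a weather feed

@dataclass
class Ad:
    name: str
    target_gender: str
    target_age: str
    min_crowd: int
    good_weather: set

def success_probability(ad: Ad, aud: Audience) -> float:
    """Crude scoring of how likely this ad is to hold the current audience."""
    score = 0.25 * (ad.target_gender in (aud.gender, "any"))
    score += 0.25 * (ad.target_age == aud.age_band)
    score += 0.25 * (aud.crowd_size >= ad.min_crowd)
    score += 0.25 * (aud.weather in ad.good_weather)
    # Dampen the estimate when people are already drifting away.
    return score * min(1.0, aud.attention_s / 5.0)

def next_ad(inventory, audience):
    """Pick the ad with the highest estimated probability of success."""
    return max(inventory, key=lambda ad: success_probability(ad, audience))
```

However the real scoring is done, the structural point stands: the affective and demographic read-out of the crowd becomes the input to a continuous optimization loop running on the sign itself.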


Conclusion

Brian Rotman and Brian Massumi are both optimistic about what access to the affective domain might occasion for our emerging posthuman communal mind. For Massumi, better grasping the domain of affect will provide a basis for resistance and counter-tactics to the political-cultural functioning of the media. (Massumi, 43-44) For Rotman, the grammaticalization of gesture holds the prospect of a new order of body mediation, opening the body to other desires and other semiotics. Pentland is equally optimistic. But his reflections on what quantification of the affective domain may offer sound more like a recipe for assimilation than for resistance. Pentland writes:


By designing systems that are aware of human social signaling, and that adapt themselves to human social context, we may be able to remove the medium’s message and replace it with the traditional messaging of face-to-face communication. Just as computers are disappearing into clothing and walls, the otherness of communications technology might disappear as well, leaving us with organizations that are not only more efficient, but that also better balance our formal, informal, and personal lives. Assimilation into the Borg Collective might be inevitable, but we can still make it a more human place to live. (2005, 39)


The computer scientist and novelist Vernor Vinge first outlined the notion that humans and intelligent machines are headed toward convergence, which he predicted would occur by 2030. (Vinge, 1993) Vinge also predicted a stage en route to the Singularity in which networked, embedded, and location-aware microprocessors provide the basis for a global panopticon. (Vinge, 2000; Wallace, 2006) Vinge has remained steadfastly positive about the possibilities presaged in this era: "...collaborations will thrive. Remote helping flourishes; wherever you go, local experts can make you as effective as a native. We experiment with a thousand new forms of teamwork and intimacy." (Vinge, 2000) Such systems are not only on the immediate horizon; they are patented and commercially available in the prototypes coming from the labs and companies founded by scientists such as Pentland, Maes, and Rekimoto, each of whom is emphatic about the need to implement and ensure privacy in the potentially panoptic systems they have developed. (Sense Networks, "Principles") We need not fear the Singularity, but beware the panopticon.