Neurofutures: Brain-Machine Interfaces and Collective Minds.
The Affective Turn and the New, New Media.

Tim Lenoir
Duke University

Introduction


The first practical steps toward augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland's work at MIT and the University of Utah, and in the work of the generation of students Sutherland and his colleague, David Evans, trained at the University of Utah. Having launched the field of interactive computer-aided design with his dissertation project, Sketchpad, Sutherland pursued between 1965 and 1968 an ambitious project to create what he called "the ultimate display," an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted camera display system. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of our body muscles; going further, he noted that while gestural control through hands and arms was the obvious choice, machines to sense and interpret eye-motion data could and would be built. "An interesting experiment," he claimed, "will be to make the display presentation depend on where we look." Sutherland's work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the inventors of the dataglove and the first virtual reality and telepresence systems at NASA-Ames Research Center, as well as Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display: a quasi "Darth Vader helmet" and integrated virtual cockpit based on eye and gesture tracking. Furness was trying to solve the problem of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, whose cockpit instruments and command communications had come to deliver more information than a fighter pilot could assimilate. Furness' solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below.

These pathbreaking projects in augmented and virtual reality, and in telepresence controlled by gesture- and eye-tracking systems, inspired a number of visionary efforts over the next generation to go all the way in creating the ultimate display: eliminating the screen and tethered systems described above altogether by directly interfacing brains and machines. In what follows I will trace lines of synergy and convergence among several areas of neuroscience, genetics, engineering, and computational media that have given rise to brain/computer/machine interfaces that may at first glance seem like the stuff of science fiction or the techno-enthusiast predictions of Singularitarians and Transhumanists, but that may be closer than you think to being realized, and quite possibly to transforming human being as we know it in radical ways. I begin with work in brain-machine interfaces currently used in therapeutic neuroprosthetics, emanating from the pioneering work on the Utah Intracortical Electrode Array; engage with the visionary speculations of neuroengineers such as Miguel Nicolelis at Duke about their future deployment in ubiquitous computing networks; and contemplate the implications of these prospective developments for reconfigured selves. The second area I will explore is the convergence of work in the cognitive neurosciences on the massive role of affect in decision making with the leveraging of next-generation social media and smart devices as the "brain-machine" interfaces for measuring, data mining, modeling, and mapping affect, in strategies to empower individuals to be more efficient, productive, and satisfied members of human collectives. If these speculations have merit, we may want to invest in "neurofutures" very soon.


Conclusion

Brian Rotman and Brian Massumi are both optimistic about what access to the affective domain might occasion for our emerging posthuman communal mind. For Massumi, a better grasp of the domain of affect will provide a basis for resistance and counter-tactics to the political-cultural functioning of the media. (Massumi, 43-44) For Rotman, the grammaticalization of gesture holds the prospect of a new order of body mediation, opening the body to other desires and other semiotics. Pentland is equally optimistic, but his reflections on what quantification of the affective domain may offer sound more like a recipe for assimilation than for resistance. Pentland writes:


By designing systems that are aware of human social signaling, and that adapt themselves to human social context, we may be able to remove the medium’s message and replace it with the traditional messaging of face-to-face communication. Just as computers are disappearing into clothing and walls, the otherness of communications technology might disappear as well, leaving us with organizations that are not only more efficient, but that also better balance our formal, informal, and personal lives. Assimilation into the Borg Collective might be inevitable, but we can still make it a more human place to live. (2005, 39)


Computer scientist/novelist Vernor Vinge first outlined the notion that humans and intelligent machines are headed toward convergence, which he predicted would occur by 2030. (Vinge, 1993) Vinge also predicted a stage en route to the Singularity in which networked, embedded, and location-aware microprocessors provide the basis for a global panopticon. (Vinge, 2000; Wallace, 2006) Vinge has remained steadfastly positive about the possibilities presaged in this era: "...collaborations will thrive. Remote helping flourishes; wherever you go, local experts can make you as effective as a native. We experiment with a thousand new forms of teamwork and intimacy." (Vinge, 2000) Such systems are not only on the immediate horizon; they are patented and commercially available in the prototypes coming from the labs and companies founded by scientists such as Pentland, Maes, and Rekimoto, each of whom is emphatic about the need to implement and ensure privacy in the potentially panoptic systems they have developed. (Sense Networks, "Principles") We need not fear the Singularity, but we should beware the panopticon.