=== Neurofutures: Brain-Machine Interfaces and Collective Minds. <br>''The Affective Turn and the New, New Media.''  ===
[[Image:Neurofutures4.jpg|right|318x450px|Neurofutures4.jpg]]




[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-268-1]


''edited by'' [http://www.livingbooksaboutlife.org/books/Neurofutures/bio Tim Lenoir]
__TOC__
== [http://www.livingbooksaboutlife.org/books/Neurofutures/Introduction Introduction]  ==


The first practical steps of augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland’s work at MIT and the University of Utah, and in work by the generation of students Sutherland and his colleague, David Evans, trained at the University of Utah. Having launched the field of interactive computer-aided design in his dissertation project, Sketchpad, between 1965 and 1968 Sutherland pursued an ambitious project to create what he called “[http://Citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.3720&rep=rep1&type=pdf the ultimate display],” an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted camera display system. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of our body muscles; going further, he noted that while gestural control through hands and arms was an obvious choice, machines to sense and interpret eye motion data could and would be built. “An interesting experiment,” he claimed, “will be to make the display presentation depend on where we look.” Sutherland’s work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the inventors of the dataglove and the first virtual reality and telepresence systems at NASA-Ames Research Center, and Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display, based on eye and gesture tracking, as a quasi “Darth Vader helmet” and integrated virtual cockpit. Furness was trying to solve problems of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, which were becoming so complicated that the amount of information a fighter pilot had to assimilate from the cockpit's instruments and command communications had become overwhelming. Furness’s solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below. [http://www.livingbooksaboutlife.org/books/Neurofutures/Introduction (more...)]

These pathbreaking projects on augmented and virtual reality, and on telepresence controlled by gesture and eye-tracking systems, inspired a number of visionary efforts over the next generation to go all the way in creating the ultimate display: eliminating the screen and the tethered systems depicted above altogether by directly interfacing brains and machines. In what follows I will trace lines of synergy and convergence among several areas of neuroscience, genetics, engineering, and computational media that have given rise to brain/computer/machine interfaces that may at first glance seem like the stuff of science fiction or the techno-enthusiast predictions of Singularitarians and Transhumanists, but may be closer than you think to being realized and quite possibly transforming human being as we know it in radical ways. I begin with work in brain-machine interfaces currently used in therapeutic neuroprosthetics, emanating from the pioneering work on the Utah Intracortical Electrode Array; engage with the visionary speculations of neuroengineers such as Miguel Nicolelis at Duke on their future deployment in ubiquitous computing networks; and contemplate the implications of these prospective developments for reconfigured selves. The second area I will explore is the convergence of work in the cognitive neurosciences on the massive role of affect in decision making and the leveraging of next-generation social media and smart devices as the “brain-machine” interfaces for measuring, data mining, modeling, and mapping affect, in strategies to empower individuals to be more efficient, productive, and satisfied members of human collectives. If these speculations have merit, we may want to invest in “neurofutures”—very soon. (More: [http://www.livingbooksaboutlife.org/books/Neuroengineering/Brain-Machine_Interfaces Brain-Machine Interfaces])


<br>
== Readings ==
; Vernon B. Mountcastle : [http://brain.oxfordjournals.org/content/120/4/701.long The Columnar Organization of the Neocortex]
; Jonathan C. Horton and Daniel L. Adams : [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569491/?tool=pubmed The Cortical Column: A Structure Without a Function]
; John A. Bargh, Tanya L. Chartrand : [http://www.yale.edu/acmelab/articles/bargh_chartrand_1999.pdf The Unbearable Automaticity of Being]
; Miguel A. L. Nicolelis, Asif A. Ghazanfar, Barbara M. Faggin, Scott Votaw, Laura M. O. Oliveira : [http://www.princeton.edu/~asifg/old/pdfs/ReconstructingtheEngram-Nicolelis%20et%20al..pdf Reconstructing the Engram: Simultaneous, Multisite, Many Single Neuron Recordings]
; John K. Chapin, Karen A. Moxon, Ronald S. Markowitz, Miguel A. L. Nicolelis : [http://www.neuro-it.net/pdf_dateien/summer_2004/Chapin%201999.pdf Real-time Control of a Robot Arm Using Simultaneously Recorded Neurons in the Motor Cortex]
; Jose M. Carmena, Mikhail A. Lebedev, Roy E. Crist, Joseph E. O'Doherty, David M. Santucci, Dragan F. Dimitrov, Parag G. Patil, Craig S. Henriquez, Miguel A. L. Nicolelis : [http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0000042 Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates]
; Benjamin Blankertz, Michael Tangermann, Carmen Vidaurre, Siamac Fazli, Claudia Sannelli, Stefan Haufe, Cecilia Maeder, Lenny Ramsey, Irene Sturm, Gabriel Curio, Klaus-Robert Müller : [http://www.frontiersin.org/neuroprosthetics/10.3389/fnins.2010.00198/full The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology]
; Karl Deisseroth, Guoping Feng, Ania K. Majewska, Gero Miesenböck, Alice Ting, Mark J. Schnitzer : [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2820367/ Next-Generation Optical Technologies for Illuminating Genetically Targeted Brain Circuits]
; Edward S. Boyden : [http://f1000.com/reports/b/3/11 A History of Optogenetics: The Development of Tools for Controlling Brain Circuits with Light]
; Olaf Sporns, Giulio Tononi, Rolf Kötter : [http://jhfc.duke.edu/jenkins/pubshare/LivingBooks_UploadFiles/21_Sporns_HumanConnectome_2005.pdf The Human Connectome: A Structural Description of the Human Brain]


== [http://www.livingbooksaboutlife.org/books/Neurofutures/Attributions Attributions] ==


<br> <br>
== A 'Frozen' PDF Version of this Living Book ==
 
; [http://livingbooksaboutlife.org/pdfs/bookarchive/Neurofutures.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]
 
=== Conclusion  ===
 
Brian Rotman and Brian Massumi are both optimistic about what access to the affective domain might occasion for our emerging posthuman communal mind. For Massumi, better grasping the domain of affect will provide a basis for resistance and counter-tactics to the political-cultural functioning of the media (Massumi, 43-44). For Rotman, the grammaticalization of gesture holds the prospect of a new order of bodily mediation, opening the body to other desires and other semiotics. Pentland is equally optimistic, but his reflections on what quantification of the affective domain may offer sound more like a recipe for assimilation than resistance. Pentland writes:
 
By designing systems that are aware of human social signaling, and that adapt themselves to human social context, we may be able to remove the medium’s message and replace it with the traditional messaging of face-to-face communication. Just as computers are disappearing into clothing and walls, the otherness of communications technology might disappear as well, leaving us with organizations that are not only more efficient, but that also better balance our formal, informal, and personal lives. Assimilation into the Borg Collective might be inevitable, but we can still make it a more human place to live. (2005, 39)
 
Computer scientist and novelist Vernor Vinge first outlined the notion that humans and intelligent machines are headed toward convergence, which he predicted would occur by 2030 (Vinge, 1993). Vinge also predicted a stage en route to the Singularity in which networked, embedded, and location-aware microprocessors provide the basis for a global panopticon (Vinge, 2000; Wallace, 2006). Vinge has remained steadfastly positive about the possibilities presaged in this era: “...collaborations will thrive. Remote helping flourishes; wherever you go, local experts can make you as effective as a native. We experiment with a thousand new forms of teamwork and intimacy.” (Vinge, 2000) Such systems are not only on the immediate horizon; they are patented and commercially available in the prototypes coming from the labs and companies founded by scientists such as Pentland, Maes, and Rekimoto, each of whom is emphatic about the need to implement and ensure privacy in the potentially panoptic systems they have developed (Sense Networks, “Principles”). We need not fear the Singularity, but beware the panopticon.
