Neurofutures

Brain-Machine Interfaces and Collective Minds
[[Image:Neurofutures4.jpg|right|318x450px|Neurofutures4.jpg]]


''edited by'' [http://www.livingbooksaboutlife.org/books/Neurofutures/bio Tim Lenoir]
__TOC__
[[Neuroengineering|Introduction]]<br>[[Neuroengineering/Brain-Machine Interfaces|Brain-Machine Interfaces]]
[[Neuroengineering/OptogeneticMapping|Optogenetic Mapping: Neurotechnology Renaissance]]<br>[[Neuroengineering/AugmentedReality|Ubiquitous Computing and Augmented Reality]]<br>[[Neuroengineering/AffectiveTurn|The Affective Turn: Emotional Branding, Neuromarketing, and the New, New Media]]<br>[[Neuroengineering/ConcludingThoughts|Concluding Thoughts]]


<br>
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-268-1]
== [http://www.livingbooksaboutlife.org/books/Neurofutures/Introduction Introduction] ==


The first practical steps toward augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland’s work at MIT and the University of Utah, and in the work of the generation of students Sutherland and his colleague David Evans trained at the University of Utah. Having launched the field of interactive computer-aided design with his dissertation project, Sketchpad, Sutherland pursued between 1965 and 1968 an ambitious project to create what he called “[http://Citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.3720&rep=rep1&type=pdf the ultimate display],” an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted camera display system. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of our body muscles; going further, he noted that while gestural control through the hands and arms was an obvious choice, machines to sense and interpret eye-motion data could and would be built. “An interesting experiment,” he claimed, “will be to make the display presentation depend on where we look.” Sutherland’s work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the inventors of the dataglove and of the first virtual reality and telepresence systems at NASA Ames Research Center, as well as Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display, based on eye and gesture tracking, as a quasi “Darth Vader helmet” and integrated virtual cockpit.
Furness was trying to solve the problem of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, which were becoming so complicated that the amount of information a fighter pilot had to assimilate from the cockpit's instruments and command communications had become overwhelming. Furness's solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below.
These pathbreaking projects on augmented and virtual reality, and on telepresence controlled by gesture- and eye-tracking systems, inspired a number of visionary efforts over the next generation to go all the way in creating the ultimate display: eliminating the screen and tethered systems described above altogether by directly interfacing brains and machines. In what follows I will trace lines of synergy and convergence among several areas of neuroscience, genetics, engineering, and computational media that have given rise to brain/computer/machine interfaces that may at first glance seem like the stuff of science fiction or the techno-enthusiast predictions of Singularitarians and Transhumanists, but that may be closer than you think to being realized, quite possibly transforming human being as we know it in radical ways. I begin with work in brain-machine interfaces currently used in therapeutic neuroprosthetics, emanating from the pioneering work on the Utah Intracortical Electrode Array; engage with the visionary speculations of neuroengineers such as Miguel Nicolelis at Duke on their future deployment in ubiquitous computing networks; and contemplate the implications of these prospective developments for reconfigured selves. The second area I will explore is the convergence of work in the cognitive neurosciences on the massive role of affect in decision making with the leveraging of next-generation social media and smart devices as the “brain-machine” interfaces for measuring, data mining, modeling, and mapping affect, in strategies to empower individuals to be more efficient, productive, and satisfied members of human collectives. If these speculations have merit, we may want to invest in “neurofutures”—very soon. (More: [http://www.livingbooksaboutlife.org/books/Neuroengineering/Brain-Machine_Interfaces Brain-Machine Interfaces])<br>


== Readings ==
; Vernon B. Mountcastle : [http://brain.oxfordjournals.org/content/120/4/701.long The Columnar Organization of the Neocortex]
; Jonathan C. Horton and Daniel L. Adams : [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569491/?tool=pubmed The Cortical Column: A Structure Without a Function]
; John A. Bargh, Tanya L. Chartrand : [http://www.yale.edu/acmelab/articles/bargh_chartrand_1999.pdf The Unbearable Automaticity of Being]
; Miguel A. L. Nicolelis, Asif A. Ghazanfar, Barbara M. Faggin, Scott Votaw, Laura M. O. Oliveira : [http://www.princeton.edu/~asifg/old/pdfs/ReconstructingtheEngram-Nicolelis%20et%20al..pdf Reconstructing the Engram: Simultaneous, Multisite, Many Single Neuron Recordings]
; John K. Chapin, Karen A. Moxon, Ronald S. Markowitz, Miguel A. L. Nicolelis : [http://www.neuro-it.net/pdf_dateien/summer_2004/Chapin%201999.pdf Real-time Control of a Robot Arm Using Simultaneously Recorded Neurons in the Motor Cortex]
; Jose M. Carmena, Mikhail A. Lebedev, Roy E. Crist, Joseph E. O'Doherty, David M. Santucci, Dragan F. Dimitrov, Parag G. Patil, Craig S. Henriquez, Miguel A. L. Nicolelis : [http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0000042 Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates]
; Benjamin Blankertz, Michael Tangermann, Carmen Vidaurre, Siamac Fazli, Claudia Sannelli, Stefan Haufe, Cecilia Maeder, Lenny Ramsey, Irene Sturm, Gabriel Curio, Klaus-Robert Müller : [http://www.frontiersin.org/neuroprosthetics/10.3389/fnins.2010.00198/full The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology]
; Karl Deisseroth, Guoping Feng, Ania K. Majewska, Gero Miesenböck, Alice Ting, Mark J. Schnitzer : [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2820367/ Next-Generation Optical Technologies for Illuminating Genetically Targeted Brain Circuits]
; Edward S. Boyden : [http://f1000.com/reports/b/3/11 A History of Optogenetics: The Development of Tools for Controlling Brain Circuits with Light]
; Olaf Sporns, Giulio Tononi, Rolf Kötter : [http://jhfc.duke.edu/jenkins/pubshare/LivingBooks_UploadFiles/21_Sporns_HumanConnectome_2005.pdf The Human Connectome: A Structural Description of the Human Brain]


== References ==
Sutherland, Ivan E. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.3584&rep=rep1&type=pdf A Head-Mounted Three Dimensional Display]." In ''Proceedings of the December 9-11, 1968, Fall Joint Computer Conference'', part I, 757-64. San Francisco, California: ACM, 1968.<br>Furness, Thomas A. III. "The Super Cockpit and Its Human Factors Challenges." ''Human Factors and Ergonomics Society Annual Meeting Proceedings'' 30, no. 1 (1986): 48-52.<br>Furness, Thomas A. III, and Dean F. Kocian. "[http://www.hitl.washington.edu/publications/r-86-1/ Putting Humans into Virtual Space]." Technical Report, Human Interface Technology Lab, University of Washington (1986).
== [http://www.livingbooksaboutlife.org/books/Neurofutures/Attributions Attributions] ==


<br>
== A 'Frozen' PDF Version of this Living Book ==
; [http://livingbooksaboutlife.org/pdfs/bookarchive/Neurofutures.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]
