=== Ubiquitous Computing and Augmented Reality  ===
The infrastructure of ubiquitous computing envisioned two decades ago by Mark Weiser and John Seely Brown offers the nutrient matrix for the posthuman extended minds proposed by Andy Clark and for the collective paraselves fantasized in Nicolelis's brain-machine interfaces and theorized beautifully by Brian Rotman (Weiser: 1991, 1994; Weiser and Brown: 1996; Clark: 2004, 2010; Rotman, Becoming Beside Ourselves, 2008): a world in which computation disappears from the desktop and merges with the objects and surfaces of our ambient environment. (Greenfield: 2006) Rather than taking work to a desktop computer, we would move among many tiny computing devices spread throughout the environment, in computationally enhanced walls, floors, pens, and desks seamlessly integrated into everyday life. We are still far from realizing Weiser's vision of computing for the twenty-first century. Although nearly every piece of technology we use contains one or more processors, we have not yet reached the transition point at which the majority of those processors are networked and addressable. But we are getting there, and there have already been a number of milestones along the road. Inspired by efforts from 1989 to 1995 at Olivetti and Xerox PARC to develop invisible interfaces interlinking coworkers through electronic badges and early RFID tags (Want: 1992, 1995, 1999), the Hewlett-Packard Cooltown project (2000-2005) offered a prototype architecture for linking everyday physical objects to Web pages by tagging them with infrared beacons, RFID tags, and bar codes. Users carrying PDAs, tablets, and other mobile devices could read those tags to view Web pages about the world around them and to engage services such as printers, radios, automatic call forwarding, and continually updated maps for finding like-minded colleagues in settings such as conferences. (Barton, 2001; Kindberg, 2002)
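To make the Cooltown notion of a "physical hyperlink" concrete, the following minimal sketch shows the resolution step such an architecture implies: a scanned tag identifier, whether read from an RFID tag, a bar code, or an infrared beacon, is looked up in a registry that maps it to a Web page or service endpoint. The identifiers, URLs, and registry contents are invented for illustration and make no claim about HP's actual implementation.

<syntaxhighlight lang="python">
# Minimal sketch of Cooltown-style "physical hyperlink" resolution: a scanned
# tag ID is looked up in a registry mapping physical things to Web presences.
# All tag IDs and URLs below are hypothetical.

TAG_REGISTRY = {
    "rfid:04:A2:3F:19": "http://example.org/places/conference-room-b",
    "barcode:0012345678905": "http://example.org/devices/third-floor-printer",
    "irbeacon:17": "http://example.org/services/call-forwarding",
}

def resolve_tag(tag_id):
    """Return the Web presence registered for a scanned tag, or None."""
    return TAG_REGISTRY.get(tag_id)

if __name__ == "__main__":
    scanned = "barcode:0012345678905"
    url = resolve_tag(scanned)
    print(scanned, "->", url if url else "no web presence registered")
</syntaxhighlight>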
While systematically constructed ubiquitous cities based on the Cooltown model have yet to take hold, many of the enabling features of ubiquitous computing environments are arising in ad hoc fashion, fueled primarily by growing worldwide mass consumption of social networking applications and of the wildly popular new generation of smart phones with advanced computing capabilities, cameras, accelerometers, and a variety of readers and sensors. In response to this trend, and building on a decade of Japanese experience with Quick Response (QR) barcodes, in December 2009 Google dispatched approximately 200,000 stickers with bar codes for the windows of its "Favorite Places" in the US, so that people could use their smart phones to find out about them. Besides such consumer-oriented uses, companies like Wal-Mart and other global retailers now routinely use RFID tags to manage industrial supply chains, and these practices are now indispensable in hospital and other medical environments. Such examples are the tip of the iceberg of increasingly pervasive computing applications for the masses. Consumer demand for electronically mediated, pervasive "brand zones" such as Apple Stores, Prada Epicenters, and the interior of your BMW, in which movement, symbols, sound, and smell all reinforce the brand message and turn shopping spaces and driving experiences into engineered synesthetic environments, is a powerful aphrodisiac for pervasive computing.
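The sticker side of this development is equally easy to picture. The short sketch below, assuming the open-source Python qrcode package is installed, encodes a place listing's URL into a QR image that any smartphone camera application can decode; the URL and filename are placeholders rather than Google's actual "Favorite Places" data.

<syntaxhighlight lang="python">
# Sketch of producing a "Favorite Places"-style sticker: encode a listing URL
# as a QR code image. Requires the open-source qrcode package
# (pip install qrcode[pil]); the URL below is hypothetical.
import qrcode

place_url = "http://example.org/places/cafe-olympia"  # hypothetical listing page
img = qrcode.make(place_url)                          # returns a PIL image
img.save("favorite_place_sticker.png")
print("QR sticker written to favorite_place_sticker.png")
</syntaxhighlight>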
Even these pathbreaking developments fall short of Weiser's vision, which was to engage multiple computational devices and systems simultaneously in the course of ordinary activities, without having to interact with a computer through mouse, keyboard, and desktop monitor and without necessarily being aware of doing so. In the years since these first experimental systems, rapid advances have taken place in mobile computing, including: new smart materials capable of supporting small, lightweight, wearable mobile cameras and communications devices; many varieties of sensor technologies; RFID tags; physical storage on "motes" or "mu-chips," such as HP's Memory Spot system, which permits storage of large media files on tiny chips instantly accessible by a PDA (McDonnell, 2010); Bluetooth; numerous sorts of GIS applications for location logging (e.g., Sony's PlaceEngine and LifeTagging system); and wearable biometric sensors (e.g., BodyMedia, SenseWear). To realize Weiser's vision, though, we must go beyond these breakthroughs, getting the attention-grabbing gadgets, smart phones, and tablets out of our hands and beginning to interact within computer-mediated environments the way we normally do with other persons and things. Here, too, recent advances have been enormous, particularly in gesture and voice recognition technologies coupled with new forms of tangible interface and information display. (Rekimoto, 2008)
Two prominent examples are the stunning gesture recognition capabilities of the Microsoft Kinect system for the Xbox, which dispenses with a game controller altogether in favor of gesture recognition as the game interface, and the EPOC headset brain-controller system from Emotiv Systems. But for our purposes in exploring some of the current routes to neuromarketing and the emergence of a brain coprocessor, the SixthSense prototype developed by Pranav Mistry and Pattie Maes at MIT points even more dramatically toward the untethered fusion of the virtual and the real central to Weiser's vision. (Mistry, 2009a; 2009b; 2009c) The SixthSense prototype comprises a pocket projector, a mirror, and a camera built into a small mobile wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The camera recognizes objects instantly, with the micro-projector overlaying information on any surface, including the object itself or the user's hand. The user can then access or manipulate the information with his or her fingers. The movements and arrangements of markers on the user's hands and fingers are interpreted as gestures that activate instructions for a wide variety of applications projected as application interfaces: search, video, social networking, essentially the entire Web. SixthSense also supports multi-touch and multi-user interaction. A sketch of the marker-tracking idea behind this interaction appears after Figure 1 below.
[Insert Figure 1: a, b, c about here.]

Figure 1: Pranav Mistry and Pattie Maes, SixthSense. The system comprises a pocket projector, a mirror, and a camera built into a wearable device connected to a mobile computing platform in the user's pocket. The camera recognizes objects instantly, with the micro-projector overlaying information on any surface, including the object itself or the user's hand. (Photos courtesy of Pranav Mistry)

a. Active phone keyboard overlaid on the user's hand. b. Camera recognizes a flight coupon and projects a departure update on the ticket. c. Camera recognizes a news story from the Web and streams video onto the page.
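As a rough illustration of the marker-tracking idea behind this kind of gesture input, the sketch below uses the open-source OpenCV library to locate a colored fingertip marker in each camera frame and report its centroid. The color range, camera index, and area threshold are illustrative assumptions, not values from Mistry and Maes's prototype, and a real system would go on to map sequences of marker positions onto gestures that drive the projected interface.

<syntaxhighlight lang="python">
# Hedged sketch of color-marker fingertip tracking (OpenCV 4.x): threshold a
# marker color in HSV space, take the largest blob, and report its centroid.
# The HSV range, camera index, and minimum area are illustrative assumptions.
import cv2
import numpy as np

LOWER_MARKER = np.array([40, 80, 80])    # assumed green marker, HSV lower bound
UPPER_MARKER = np.array([80, 255, 255])  # assumed green marker, HSV upper bound

def find_marker(frame):
    """Return the (x, y) centroid of the largest marker-colored blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_MARKER, UPPER_MARKER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < 100:   # ignore tiny specks
        return None
    m = cv2.moments(largest)
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)            # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pos = find_marker(frame)
        if pos is not None:
            cv2.circle(frame, pos, 10, (0, 0, 255), 2)   # highlight the marker
        cv2.imshow("marker tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
</syntaxhighlight>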
Thus far we have emphasized technologies that are enabling the rise of pervasive computing, but 'ubiquitous computing' denotes not only a technical thrust; it is equally a socio-cultural formation, an imaginary, and a source of desire. From our perspective its power becomes transformative in permeating the affective domain, the machinic unconscious. Perhaps the most significant developments driving this reconfiguration of affect are the phenomena of social networking and the use of "smart phones." People are not only spending more time online; they are seeking to do it together with other wired "friends." Surveys by the Pew Internet & American Life Project report that between 2005 and 2008 use of social networking sites by online American adults 18 and older quadrupled from 8% to 46%, and that 65% of teens 12-17 used social networking sites like Facebook, MySpace, or LinkedIn. The Nielsen Company reports that 22% of all time spent online is devoted to social network sites. (NielsenWire, June 15) Moreover, the new internet generation wants to connect in order to share: the Pew Internet & American Life Project has found that 64% of online teens ages 12-17 have participated in a wide range of content-creating and sharing activities on the internet; 39% of online teens share their own artistic creations online, such as artwork, photos, stories, or videos; and 26% remix content they find online into their own creations. (Lenhart, 2010, "Social Media") The desire to share is not limited to text and video but is extending to data-sharing of all sorts. Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed. On MedHelp, one of the largest Internet forums for health information, more than 30,000 new personal tracking projects are started by users every month. Foursquare, a geo-tracking application with about one million users, keeps a running tally of how many times players "check in" at every locale, automatically building a detailed diary of movements and habits; many users publish these data widely. (Wolf, 2010) Indeed, 60% of internet users are not concerned about the amount of information available about them online, and 61% of online adults do not take steps to limit that information; just 38% say they have taken steps to limit the amount of online information that is available about them. (Madden: 2007, 4) As Kevin Kelly points out, we are witnessing a feedback loop between new technologies and the creation of desire. The explosive development of mobile, wireless communications, the widespread use of RFID tags, Bluetooth, embedded sensors, and QR addressing, applications like Shazam for snatching a link and downloading music from your ambient environment, GIS applications of all sorts, and social phones such as the many Android phones and the iPhone 4 that emphasize social networking are creating desire for open sharing, collaboration, even communalism, and above all a new kind of mind. (Kelly: 2009a, 2009b) [See Figure 2: a, b, c]
<br>
[Insert Figure 2: a, b, c about here.]

Figure 2: 3D mapping, location-aware applications, and augmented reality browsers. (Photos courtesy of Earthmine.com and Layer.com)

a. Earthmine attaches location-aware apps (in this case streaming video) to specific real-world locations. b. Earthmine enables 3D objects to be overlaid on specific locations. c. The Layar augmented reality browser overlays information, graphics, and animation on specific locations.
<br>



Neurofutures: Brain-Machine Interfaces and Collective Minds.
The Affective Turn and the New, New Media.

Tim Lenoir
Duke University

=== Introduction ===


The first practical steps toward augmenting human capability through a close coupling of man and machine have their origins in Ivan Sutherland's work at MIT and the University of Utah and in the work of the generation of students Sutherland and his colleague David Evans trained at the University of Utah. Having launched the field of interactive computer-aided design with his dissertation project, Sketchpad, Sutherland between 1965 and 1968 pursued an ambitious project to create what he called "the ultimate display," an augmented reality system in which computer-generated images of all sorts could be overlaid on scenes viewed through a head-mounted camera display system. Among the visionary suggestions Sutherland made in this early work was that interaction with the computer need not be based on keyboard or joystick linkages but could be controlled through computer-based sensing of the positions of almost any of our body muscles; going further, he noted that while gestural control through hands and arms was an obvious choice, machines to sense and interpret eye motion data could and would be built. "An interesting experiment," he claimed, "will be to make the display presentation depend on where we look." Sutherland's work inspired Scott Fisher, Brenda Laurel, and Jaron Lanier, the inventors of the dataglove and of the first virtual reality and telepresence systems at NASA-Ames Research Center, as well as Tom Furness at Wright-Patterson Air Force Base in Ohio, who developed his own version of the ultimate display, based on eye and gesture tracking, as a quasi "Darth Vader helmet" and integrated virtual cockpit. Furness was trying to solve the problem of how humans interact with very complex machines, particularly the new high-tech F-16, F-14, and F-18 fighter planes, which had become so complicated that the amount of information a pilot had to assimilate from the cockpit's instruments and command communications was overwhelming. Furness's solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below.

These pathbreaking projects on augmented and virtual reality and on telepresence controlled by gesture- and eye-tracking systems inspired a number of visionary efforts over the next generation to go all the way toward the ultimate display by eliminating the screen and the tethered systems described above altogether and directly interfacing brains and machines. In what follows I trace lines of synergy and convergence among several areas of neuroscience, genetics, engineering, and computational media that have given rise to brain/computer/machine interfaces which may at first glance seem like the stuff of science fiction or the techno-enthusiast predictions of Singularitarians and Transhumanists, but which may be closer than you think to being realized and quite possibly to transforming human being as we know it in radical ways. I begin with work in brain-machine interfaces currently used in therapeutic neuroprosthetics emanating from the pioneering work on the Utah Intracortical Electrode Array, engage with the visionary speculations of neuroengineers such as Miguel Nicolelis at Duke on their future deployment in ubiquitous computing networks, and contemplate the implications of these prospective developments for reconfigured selves. The second area I explore is the convergence of work in the cognitive neurosciences on the massive role of affect in decision making with the leveraging of next-generation social media and smart devices as the "brain-machine" interfaces for measuring, data mining, modeling, and mapping affect in strategies to empower individuals to be more efficient, productive, and satisfied members of human collectives. If these speculations have merit, we may want to invest in "neurofutures"—very soon. (More: Brain-Machine Interfaces)





=== The Affective Turn, Emotional Branding and Neuromarketing ===

A number of critical theorists, including Deleuze and Guattari, Brian Massumi, Bernard Stiegler, Patricia Clough, and more recently Hardt and Negri, have observed that under globalization capitalism has shifted from production and consumption toward the economic circulation of pre-individual bodily capacities, or affects, in the domain of biopolitical control. At the same moment that these scholars were urging us to heed the affective turn, marketing theorists and "mad men" were becoming sensitized to the same radicalizing shift in capitalism. At the end of the 1990s major marketing gurus such as Marc Gobé pointed out to their colleagues that the world is clearly moving from an industrially driven economy toward a people-driven economy that puts the consumer in the seat of power; and, as we are all becoming painfully aware, over the past fifty years the economic base has shifted from production to consumption. Marketers have embraced the challenges of this new reality with new strategies. Gobé observed that what used to be straightforward functional ideas, such as computers, have morphed from "technology equipment" into larger, consumer-focused concepts such as "lifestyle entertainment," while food is no longer about cooking or chores but about home and lifestyle design and "sensory experiences." Even before neuroscience entered the marketing scene, Gobé and his colleagues were coming to terms with what the theorists named above have called the immaterial labor of affect. Gobé called the visionary approach he was offering to address the new capitalist realities "Emotional Branding," but what he had in mind was more than simple emotion and closer to what we call affect. "By emotional," Gobé meant, "how a brand engages consumers on the level of the senses and emotions; how a brand comes to life for people and forges a deeper, lasting connection... It focuses on the most compelling aspect of the human character; the desire to transcend material satisfaction, and experience emotional fulfillment. A brand is uniquely situated to achieve this because it can tap into the aspirational drives which underlie human motivation."

Nearly every academic discipline, from art history and visual studies to critical theory and recently even the bastions of economics, has been moved in one way or another to get on the affective bandwagon. Recent neuroscience points to an entirely new set of constructs underlying economic decision-making. The standard economic theory of constrained utility maximization is most naturally interpreted either as the result of learning based on consumption experiences or as careful deliberation—a balancing of the costs and benefits of different options—of the sort that characterizes complex decisions like planning for retirement, buying a house, or hammering out a contract. While not denying that deliberation is part of human decision making, neuroscience points to two generic inadequacies of this approach: its inability to handle the crucial roles of automatic and of emotional processing.

A body of empirical research spanning the past fifteen years, too large to discuss here, has documented the range and extent of complex psychological functions that can transpire automatically, triggered by environmental events without an intervening act of conscious will or subsequent conscious guidance. (Bargh, 1999; 2000; Hassin, 2005) First, much of the brain implements "automatic" processes, which are faster than conscious deliberation and which occur with little or no awareness or feeling of effort (John Bargh et al. 1996; Bargh and Tanya Chartrand 1999). Because people have little or no introspective access to these processes, or volitional control over them, and because these processes evolved to solve problems of evolutionary importance rather than to respect logical dicta, the behavior they generate need not follow normative axioms of inference and choice. Second, our behavior is strongly influenced by finely tuned affective (emotion) systems whose basic design is common to humans and many animals (Joseph LeDoux 1996; Jaak Panksepp 1998; Edmund Rolls 1999). These systems are essential for daily functioning, and when they are damaged or perturbed by brain injury, stress, imbalances in neurotransmitters, or the "heat of the moment," the logical-deliberative system—even if completely intact—cannot regulate behavior appropriately. Human behavior thus requires a fluid interaction between controlled and automatic processes, and between cognitive and affective systems. A number of studies by Damasio and his colleagues have shown that deliberative action cannot take place in the absence of affective systems. Yet many behaviors that emerge from this interplay are routinely and falsely interpreted as the product of cognitive deliberation alone (George Wolford, Michael Miller, and Michael Gazzaniga 2000). These results suggest that introspective accounts of the basis for choice should be taken with a grain of salt. Because automatic processes are designed to keep behavior "off-line" and below consciousness, we have far more introspective access to controlled than to automatic processes. Since we see only the top of the automatic iceberg, we naturally tend to exaggerate the importance of control. Taking these findings on board, a growing vanguard of "neuroeconomists" argues that economic theory ought to take the findings of neuroscience and neuromarketing seriously. (Perrachione and Perrachione, "Brains and Brands," 2008)

But even in advance of engineering solutions for building neurochips and neuro-coprocessors, a burgeoning "adfotainment-industrial complex" is emerging that marries an applied science of affect with media and brand analysis. Among the most successful entrants in this field is MindSign Neuromarketing, a San Diego firm that engages media and game companies to fine-tune their products through the company's techniques of "neurocinema": the real-time monitoring of the brain's reaction to movies using fMRI, eye-tracking, galvanic skin response, and other scanning techniques to monitor the amygdala while test subjects watch a movie or play a game. MindSign examines subjects' brain response "to your ad, game, speech, or film. We look at how well and how often it engages the areas for attention/emotion/memory/and personal meaning (importance)." MindSign cofounder Philip Carlsen said in an NPR interview that he foresees a future in which directors send their dailies (raw footage fresh from the set) to the MRI lab for optimization. "You can actually make your movie more activating," he said, "based on subjects' brains. We can show you how your product is affecting the consumer brain even before the consumer is able to say anything about it." The leaders in this adfotainment-industrial complex are not building on pseudoscience; they have close connections to major neuroscience labs and employ some of the leading researchers of the neuroscience of affect on their teams. NeuroFocus, located in Berkeley, California, was founded by the UC Berkeley-trained engineer A.K. Pradeep, and its team of scientists includes Robert T. Knight, the director of the Helen Wills Neuroscience Institute at UC Berkeley. NeuroFocus was recently acquired by the powerful Nielsen Company.
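It is worth pausing over what "monitoring the amygdala while test subjects watch a movie" means at the level of data. The sketch below is a minimal, hypothetical illustration of region-of-interest (ROI) analysis of that general kind: average the BOLD signal inside a masked region at each timepoint and express it as percent signal change from baseline. The synthetic volume and mask stand in for real fMRI data; nothing here reproduces MindSign's actual pipeline.

<syntaxhighlight lang="python">
# Hypothetical sketch of region-of-interest (ROI) analysis: average the BOLD
# signal inside an ROI mask at each timepoint and report percent signal change.
# The synthetic 4D volume and the mask are placeholders for real fMRI data.
import numpy as np

rng = np.random.default_rng(0)
bold = rng.normal(loc=1000.0, scale=5.0, size=(16, 16, 12, 120))  # x, y, z, time
roi_mask = np.zeros((16, 16, 12), dtype=bool)
roi_mask[6:9, 6:9, 5:7] = True                                    # stand-in "amygdala" ROI

roi_timecourse = bold[roi_mask].mean(axis=0)       # mean ROI signal per volume
baseline = roi_timecourse[:10].mean()              # first 10 volumes as baseline
pct_signal_change = 100.0 * (roi_timecourse - baseline) / baseline

peak = int(np.argmax(pct_signal_change))
print(f"Peak ROI response at volume {peak}: {pct_signal_change[peak]:.2f}% signal change")
</syntaxhighlight>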

I want to consider the convergence of these powerful tools of neuro-analysis and media in light of what some theorists have considered the potential of our increasing symbiosis with media technology for reconfiguring the human. Our new collective minds are deeply rooted in an emerging corporeal axiomatic, the domain identified by Felix Guattari as the machinic unconscious and elaborated by Patricia Clough as a "teletechnological machinic unconscious" (Clough, Autoaffection, 2000)—a wide range of media ecologies, material practices, and social apparatuses for encoding and enforcing ways of behaving through routines, patterns of movement and gestures, as well as haptic and even neurological patterning and re-patterning that facilitate specific behaviors and modes of action. (Guattari, 2009) In this model, technological media are conjoined with unconscious and preconscious cognitive activity to constitute subjects in particular, medium-specific directions.

The affective domain is being reshaped by electronic media. Core elements of the domain of affect are unconscious social signals, consisting primarily of body language, facial expressions, and tone of voice. These social signals are not just a complement to conscious language; they form a separate communication network that influences behavior and can provide a window into our intentions, goals, and values. Much contemporary research in cognitive science and other areas of social psychology is reaffirming that humans are intensely social animals and that our behavior is much more a function of our social networks than anyone had previously imagined. The social circuits formed by the back-and-forth pattern of unconscious signaling between people shape much of our behavior in families, work groups, and larger organizations. (Pentland, 2007, "Collective Nature of Human Intelligence") By paying careful attention to the patterns of signaling within a social network, Pentland and others are demonstrating that it is possible to harvest tacit knowledge that is spread across the network's individuals. While our hominid ancestors communicated face-to-face through voice, face, and hand gestures, our communications today are increasingly electronically mediated, our social groups dispersed and distributed. But this does not mean that affect has disappeared or somehow been stripped away. On the contrary, as the "glue" of social life, affect is present in the electronic social signals that link us together. The domain of affect is embedded within and deeply intertwined with these pervasive computing networks. The question is: as we become more socially interlinked than ever through electronic media, can the domain of affect be accessed, measured, perhaps understood, and possibly manipulated, for better or worse?

A number of researchers are developing systems to access, record, and map the domain of affect, including a suite of applications by Sony Interaction Laboratory director Jun Rekimoto (Rekimoto, 2006; 2007a; 2007b; 2010), such as the Affect Phone and a LifeLogging system coupled with augmented reality, and a multiperson awareness medium for connecting distant friends and family developed by Pattie Maes's group at MIT. For the past five years Sandy Pentland and his students at the MIT Media Lab have been working on what they call a socioscope for accessing the affective domain in order to make new socially networked media smarter by analyzing prosody, gesture, and social context. The socioscope consists of three main parts: "smart" phones programmed to keep track of their owners' locations and their proximity to other people by sensing cell tower and Bluetooth IDs; electronic badges that record the wearers' locations, ambient audio, and upper-body movement via a two-dimensional accelerometer; and a microphone and body-worn camera that record the wearer's context, together with software that extracts audio "signals": specifically, the exact timing of individuals' vocalizations and the amount of modulation (in both pitch and amplitude) of those vocalizations. Unlike most speech or gesture research, the goal is to measure and classify speaker interaction rather than to puzzle out the speakers' meanings or intentions.
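Two of the audio "signals" just described, the timing of vocalizations and the amount of amplitude modulation, can be sketched in a few lines. The toy example below runs a simple energy-based voice-activity detector over a synthetic signal; the frame length, threshold, and test signal are illustrative assumptions, not the socioscope's actual parameters.

<syntaxhighlight lang="python">
# Toy extraction of two social-signal features: (1) the timing of vocalizations,
# via energy-based voice-activity detection, and (2) the amount of amplitude
# modulation of the voiced portions. All parameters are illustrative assumptions.
import numpy as np

def frame_energies(signal, frame_len):
    """RMS energy of consecutive non-overlapping frames."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def vocalization_segments(energies, threshold, frame_sec):
    """Return (start_sec, end_sec) spans where energy exceeds the threshold."""
    active = energies > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start * frame_sec, i * frame_sec))
            start = None
    if start is not None:
        segments.append((start * frame_sec, len(active) * frame_sec))
    return segments

if __name__ == "__main__":
    sr, frame_len = 8000, 400                      # 8 kHz audio, 50 ms frames
    t = np.arange(0, 6.0, 1.0 / sr)
    # toy "speech": a tone switched on and off to simulate speaking turns
    speech = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 0.5 * t) > 0)
    signal = speech + 0.01 * np.random.default_rng(0).normal(size=t.shape)

    energies = frame_energies(signal, frame_len)
    segs = vocalization_segments(energies, threshold=0.1, frame_sec=frame_len / sr)
    voiced = energies[energies > 0.1]
    modulation = voiced.std() / voiced.mean() if len(voiced) else 0.0

    print("vocalization spans (s):", [(round(a, 2), round(b, 2)) for a, b in segs])
    print(f"amplitude modulation (coefficient of variation): {modulation:.3f}")
</syntaxhighlight>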

One implementation of this technology is the Serendipity system, which runs on Bluetooth-enabled mobile phones and is built on BlueAware, an application that scans for other Bluetooth devices in the user's proximity. (Eagle, 2005) When Serendipity discovers a new device nearby, it automatically sends a message to a social gateway server with the discovered device's ID. If the server finds a match between the two users' profiles, it sends a customized picture message to each user, introducing them to one another. The phone extracts the social-signaling features as a background process, both to provide feedback to the user about how that person sounded and to build a profile of the user's interactions with the other person. The power of this system is that it can be used to create, verify, and better characterize relationships in online social network systems such as Facebook, MySpace, and LinkedIn. A commercial application of this technology is Citysense, which acquires millions of data points to analyze aggregate human behavior and develop a live map of city activity, learns where each user likes to spend time, and processes the movements of other users with similar patterns. Citysense displays not only "where is everyone right now" on the user's PDA but "where is everyone like me right now." (Sense Networks, 2008)
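The matching flow described for Serendipity, in which a phone reports a newly discovered device to a gateway server that compares the two users' profiles and, above a threshold, introduces them, can be sketched as follows. The device IDs, profiles, and similarity rule are all hypothetical; this illustrates the described logic, not the actual Serendipity or BlueAware code.

<syntaxhighlight lang="python">
# Sketch of Serendipity-style introduction matching: a phone reports a newly
# discovered Bluetooth device ID; the gateway compares the two users' profiles
# and, above a threshold, sends each an introduction. Everything is hypothetical.

PROFILES = {
    "bt:aa:11": {"name": "Asha",  "interests": {"ubicomp", "jazz", "cycling"}},
    "bt:bb:22": {"name": "Marco", "interests": {"ubicomp", "cycling", "chess"}},
}

MATCH_THRESHOLD = 0.3

def jaccard(a, b):
    """Overlap between two interest sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def handle_discovery(reporter_id, discovered_id, send_message):
    """Gateway-side handler: introduce two users if their profiles overlap enough."""
    reporter, other = PROFILES.get(reporter_id), PROFILES.get(discovered_id)
    if not reporter or not other:
        return
    score = jaccard(reporter["interests"], other["interests"])
    if score >= MATCH_THRESHOLD:
        shared = ", ".join(reporter["interests"] & other["interests"])
        send_message(reporter_id, f"You are near {other['name']} (shared interests: {shared})")
        send_message(discovered_id, f"You are near {reporter['name']} (shared interests: {shared})")

if __name__ == "__main__":
    handle_discovery("bt:aa:11", "bt:bb:22",
                     send_message=lambda dev, text: print(f"-> {dev}: {text}"))
</syntaxhighlight>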

This technology has a number of implications for quantifying the machinic unconscious of social signals. Enabling machines to know social context will enhance many forms of socially aware communication; indeed, the idea is to overcome some of the major drawbacks of our current computationally mediated forms of communication. For example, a quantifiable model of social context would permit the mapping of group structures and information flows, the identification of enabling nodes and bottlenecks, and feedback on group interactions: Did you sound forceful during a negotiation? Did you sound interested when you were talking to your spouse? Did you sound like a good team member during the teleconference?
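As a toy illustration of what mapping group structures and identifying enabling nodes and bottlenecks can mean computationally, the sketch below builds a small weighted interaction graph with the open-source networkx library and ranks people by betweenness centrality, a standard proxy for who brokers information flow. The names and interaction counts are invented; in a real deployment the edges would come from the badge and phone logs described above.

<syntaxhighlight lang="python">
# Toy "socioscope" network analysis: build an interaction graph and rank people
# by betweenness centrality to flag likely brokers and bottlenecks.
# The people and interaction counts below are invented for illustration.
import networkx as nx

# (person A, person B, number of logged face-to-face interactions) -- hypothetical
interactions = [
    ("Ana", "Ben", 14), ("Ben", "Chloe", 9), ("Chloe", "Dev", 11),
    ("Dev", "Ana", 3), ("Ben", "Eli", 7), ("Eli", "Fay", 12), ("Chloe", "Fay", 2),
]

G = nx.Graph()
for a, b, count in interactions:
    G.add_edge(a, b, weight=count)

centrality = nx.betweenness_centrality(G)   # who sits on the most shortest paths
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person:6s} betweenness = {score:.2f}")
</syntaxhighlight>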


I want to close these reflections by pointing to two newly introduced technologies that build upon some of the same data-mining techniques for creating profiles discussed in connection with Pentland's Citysense program. Of these final two, the less invasive technology I want to highlight is Streetline, a San Francisco-based tech firm that realizes many of the innovations first experimented with in Cooltown and incorporates low-power mesh technologies first developed in the motes project at Berkeley in the late 1990s. Streetline was selected as the winner of the IBM Global Entrepreneurship Program's SmartCamp 2010 for developing the free Parker app, which not only shows you where parking meters are located but also which of them are available. Forget circling a five-block radius waiting for a spot to appear: with this app (available for iPhone and Android) you can pinpoint and snag that elusive space. Streetline captures data using self-powered motes, sensors mounted in the ground at each parking space, which detect whether or not a space is vacant. The Parker app uses your smartphone's location sensors to know where you are and highlight local parking spots; it also uses a large screen (in your car, for instance) to display a dynamic map of the nearest spots rather than just a list of street addresses. The parking-meter data from the sensors are transmitted across ultra-low-power mesh networks to Streetline servers, which build a real-time picture of which spaces are vacant. This information can be shared with drivers through the Parker app, and also with city officials, operators, and policy managers. The app goes even further: once you park, it provides walking directions back to your vehicle, records how much time you have on the meter, and alerts you when time is getting short. This is a truly cool app.
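The core lookup the Parker app is described as performing, returning the nearest vacant space given the driver's position and a live per-space vacancy feed, can be sketched as follows. The coordinates, space identifiers, and vacancy flags are invented, and the sketch makes no claim about Streetline's actual data model or APIs.

<syntaxhighlight lang="python">
# Sketch of a nearest-vacant-space lookup over a per-space vacancy feed.
# Coordinates, IDs, and vacancy flags are hypothetical illustrations only.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# per-space state as reported over the mesh network -- hypothetical
spaces = [
    {"id": "SF-0412", "lat": 37.7893, "lon": -122.4012, "vacant": False},
    {"id": "SF-0413", "lat": 37.7891, "lon": -122.4020, "vacant": True},
    {"id": "SF-0587", "lat": 37.7880, "lon": -122.4005, "vacant": True},
]

def nearest_vacant(lat, lon, spaces):
    """Return the closest currently vacant space, or None if all are taken."""
    vacant = [s for s in spaces if s["vacant"]]
    return min(vacant, key=lambda s: haversine_m(lat, lon, s["lat"], s["lon"]), default=None)

if __name__ == "__main__":
    spot = nearest_vacant(37.7894, -122.4010, spaces)
    print("Nearest vacant spot:", spot["id"] if spot else "none available")
</syntaxhighlight>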
But this app sits on a spectrum of technologies that use cell-phone data to track and trace your location. A more disturbing surveillance use of new media technology combined with data-mining and profiling tools comes from Immersive Labs of New York, which uses webcams embedded in billboards and display systems in public areas, such as Times Square, an airport, or a theme park, to grab footage of passers-by for facial recognition tools that measure the impact of an ad running on the screen. In this application, artificial intelligence software makes existing digital signs smarter, sequences ads, and pushes media to the persons in front of the screen. Immersive Labs software makes real-time decisions about which ads to display based on the current weather and the gender, age, crowd, and attention time of the audience. The technology can adapt to multiple environments and multiple ads on a single screen, and it works with both individuals and large groups. Using a standard webcam connected to any existing digital screen, the system determines age, gender, and attention time and automatically schedules targeted advertising content. The software calculates the probability of success for each advertisement and makes real-time decisions about which ad should play next, while its analytics report on ad performance and demographics (e.g., gender, age, distance, attention time, dwell time, gazes). The company claims not to store the images of individuals it has analyzed but to discard them immediately after the interaction—we're not so sure.
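The ad-selection step described here, scoring candidate ads against audience attributes estimated from the webcam together with context such as weather and then playing the best match, can be illustrated with a toy sketch. The ads, targeting rules, and weights below are invented; this is not Immersive Labs' algorithm.

<syntaxhighlight lang="python">
# Toy real-time ad selection: score each candidate ad against the detected
# audience attributes and context, then play the highest-scoring one.
# Ads, rules, and weights are invented; gender is estimated upstream but is
# deliberately left out of this toy scoring rule.

ADS = [
    {"name": "umbrella_promo", "target_weather": "rain", "target_age": "any",   "base": 0.2},
    {"name": "energy_drink",   "target_weather": "any",  "target_age": "18-34", "base": 0.3},
    {"name": "luxury_watch",   "target_weather": "any",  "target_age": "35-54", "base": 0.1},
]

def score(ad, audience, weather):
    s = ad["base"]
    if ad["target_weather"] in (weather, "any"):
        s += 0.3                                          # context match
    if ad["target_age"] in (audience["age_band"], "any"):
        s += 0.4                                          # demographic match
    s += min(audience["attention_sec"], 10) / 10 * 0.3    # attentive crowds score higher
    return s

def choose_next_ad(audience, weather):
    """Return the ad with the highest score for the current audience and context."""
    return max(ADS, key=lambda ad: score(ad, audience, weather))

if __name__ == "__main__":
    detected = {"age_band": "18-34", "gender": "mixed", "attention_sec": 6.5}
    print("Next ad:", choose_next_ad(detected, weather="rain")["name"])
</syntaxhighlight>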


=== Conclusion ===

Brian Rotman and Brian Massumi are both optimistic about what access to the affective domain might occasion for our emerging posthuman communal mind. For Massumi, a better grasp of the domain of affect will provide a basis for resistance and counter-tactics to the political-cultural functioning of the media. (Massumi, 43-44) For Rotman, the grammaticalization of gesture holds the prospect of a new order of body mediation, opening it to other desires and other semiotics. Pentland is equally optimistic. But his reflections on what quantification of the affective domain may offer sound more like a recipe for assimilation than for resistance. Pentland writes:


By designing systems that are aware of human social signaling, and that adapt themselves to human social context, we may be able to remove the medium’s message and replace it with the traditional messaging of face-to-face communication. Just as computers are disappearing into clothing and walls, the otherness of communications technology might disappear as well, leaving us with organizations that are not only more efficient, but that also better balance our formal, informal, and personal lives. Assimilation into the Borg Collective might be inevitable, but we can still make it a more human place to live. (2005, 39)


Computer scientist and novelist Vernor Vinge first outlined the notion that humans and intelligent machines are headed toward convergence, which he predicted would occur by 2030. (Vinge, 1993) Vinge also predicted a stage en route to the Singularity in which networked, embedded, and location-aware microprocessors provide the basis for a global panopticon. (Vinge, 2000; Wallace, 2006) Vinge has remained steadfastly positive about the possibilities presaged in this era: "...collaborations will thrive. Remote helping flourishes; wherever you go, local experts can make you as effective as a native. We experiment with a thousand new forms of teamwork and intimacy." (Vinge, 2000) Such systems are not only on the immediate horizon; they are patented and commercially available in the prototypes coming from the labs and companies founded by scientists such as Pentland, Maes, and Rekimoto, each of whom is emphatic about the need to implement and ensure privacy in the potentially panoptic systems they have developed. (Sense Networks, "Principles") We need not fear the Singularity; but beware the panopticon.