RoboJelly diagram
A computer-aided model of Robojelly shows the vehicle’s two bell-like structures.

POWERED by HYDROGEN & OXYGEN
http://www.utdallas.edu/news/2012/3/22-16551_Jellyfish-Robot-Powered-by-Hydrogen-and-Nanotechno_article-wide.html
Undersea Vehicle Powered by Hydrogen and Oxygen

Researchers at The University of Texas at Dallas and Virginia Tech have created an undersea vehicle inspired by the common jellyfish that runs on renewable energy and could be used in ocean rescue and surveillance missions. In a study published this week in Smart Materials and Structures, scientists created a robotic jellyfish, dubbed Robojelly, that feeds off hydrogen and oxygen gases found in water. “We’ve created an underwater robot that doesn’t need batteries or electricity,” said Dr. Yonas Tadesse, assistant professor of mechanical engineering at UT Dallas and lead author of the study. “The only waste released as it travels is more water.”


Engineers and scientists have increasingly turned to nature for inspiration when creating new technologies. The simple yet powerful movement of the moon jellyfish made it an appealing animal to simulate. The Robojelly consists of two bell-like structures made of silicone that fold like an umbrella. Connecting the umbrella are muscles that contract to move it.

In this study, researchers upgraded the original, battery-powered Robojelly to be self-powered. They did that through a combination of high-tech materials, including artificial muscles that contract when heated. These muscles are made of a nickel-titanium alloy wrapped in carbon nanotubes, coated with platinum and housed in a pipe. As the mixture of hydrogen and oxygen encounters the platinum, heat and water vapor are created. That heat causes a contraction that moves the muscles of the device, pumping out the water and starting the cycle again.

“It could stay underwater and refuel itself while it is performing surveillance,” Tadesse said. In addition to military surveillance, Tadesse said, the device could be used to detect pollutants in water. He said the next step would be refining the legs of the device to move independently, allowing the Robojelly to travel in more than one direction.
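
To make the actuation cycle concrete, here is a toy simulation in Python. Every constant (transformation temperature, heating and cooling rates) is an assumption chosen for illustration; only the 450 ms fuel pulse comes from the paper’s abstract. It is a sketch of the cycle described above, not the authors’ model.

```python
# Toy model of one fuel-powered actuation cycle: catalytic heating while
# fuel flows, Newtonian cooling in water afterwards, contraction whenever
# the NiTi wire sits above its transformation temperature.
AMBIENT = 20.0        # water temperature, deg C (assumed)
T_TRANSFORM = 70.0    # NiTi transformation temperature, deg C (assumed)
HEAT_RATE = 300.0     # deg C/s while H2/O2 reacts on the platinum (assumed)
COOL_RATE = 2.0       # 1/s cooling constant in water (assumed)
FUEL_PULSE = 0.45     # s of fuel delivery (the 450 ms cited in the abstract)
DT = 0.01             # s integration step

temp, t, state = AMBIENT, 0.0, "relaxed"
while t < 3.0:
    heating = HEAT_RATE if t < FUEL_PULSE else 0.0      # single fuel pulse
    temp += (heating - COOL_RATE * (temp - AMBIENT)) * DT  # heat balance
    new_state = "contracted" if temp > T_TRANSFORM else "relaxed"
    if new_state != state:
        print(f"t={t:.2f}s  temp={temp:.0f}C  bell {new_state}")
        state = new_state
    t += DT
```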

Dr. Ray Baughman, the Robert A. Welch Distinguished Chair in Chemistry and director of the Alan G. MacDiarmid NanoTech Institute at UT Dallas, was an author of the study. The research was a collaboration between researchers at The University of Texas at Dallas and Virginia Polytechnic Institute and State University (Virginia Tech), including Dr. Shashank Priya, the study’s senior author. The study was funded by the Office of Naval Research.

RoboJelly
The Robojelly, shown here out of water, has an outer structure made out of silicone.

LIMITLESS FUEL SOURCE
http://io9.com/5895083/when-the-earth-is-uninhabited-this-robotic-jellyfish-will-still-be-roaming-the-seas
When the Earth is uninhabited, this robotic jellyfish will still be roaming the seas
by Esther Inglis-Arkell  /  March 21, 2012

Virginia Tech and the University of Texas at Dallas have claimed their place as the leading purveyors of robot-based nautical doom with Robojelly, a robot that simulates the look and movement of a cnidarian. Anyone who has seen jellies knows that they move with a repetitive contraction of their bells, or their transparent outer shells. This movement requires two motions: a contraction and a snap back to the original position. For this carbon nanotube jellyfish, the engineers used a commercially available, shape-memory, titanium-and-nickel alloy to mimic the snap back. The contraction was harder to engineer. The Robojelly needed muscles, so researchers covered the shape-memory alloy with platinum-coated carbon nanotube sheets. When hydrogen and oxygen gases in the water make contact with the platinum — which is in the form of black powder — they create a reaction that gives off heat. This causes the nickel-titanium alloy to contract. And since hydrogen and oxygen are in seawater, these jellies could, with some future tinkering, roam the oceans indefinitely.

The deformation of the bell, powered by this reaction, was found to be a modest 13.5%. An electro-robojelly can manage 29% and a biological one can get an impressive 42%, but neither of the latter can power themselves until judgment day.

CONTACT
Yonas Tadesse
http://me.utdallas.edu/people/tadesse.html
email : yonas.tadesse [at] utdallas [dot] edu

ABSTRACT
http://iopscience.iop.org/0964-1726/21/4/045013

“Artificial muscles powered by a renewable energy source are desired for joint articulation in bio-inspired autonomous systems. In this study, a robotic underwater vehicle, inspired by jellyfish, was designed to be actuated by a chemical fuel source. The fuel-powered muscles presented in this work comprise nano-platinum catalyst-coated multi-wall carbon nanotube (MWCNT) sheets, wrapped on the surface of nickel–titanium (NiTi) shape memory alloy (SMA). As a mixture of oxygen and hydrogen gases makes contact with the platinum, the resulting exothermic reaction activates the nickel–titanium (NiTi)-based SMA. The MWCNT sheets serve as a support for the platinum particles and enhance the heat transfer due to the high thermal conductivity between the composite and the SMA. A hydrogen and oxygen fuel source could potentially provide higher power density than electrical sources. Several vehicle designs were considered and a peripheral SMA configuration under the robotic bell was chosen as the best arrangement. Constitutive equations combined with thermodynamic modeling were developed to understand the influence of system parameters that affect the overall actuation behavior of the fuel-powered SMA. The model is based on the changes in entropy of the hydrogen and oxygen fuel on the composite actuator within a channel. The specific heat capacity is the dominant factor controlling the width of the strain for various pulse widths of fuel delivery. Both theoretical and experimental strains for different diameter (100 and 150 µm) SMA/MWCNT/Pt fuel-powered muscles with dead weight attached at the end exhibited the highest magnitude under 450 ms of fuel delivery within 1.6 mm diameter conduit size. Fuel-powered bell deformation of 13.5% was found to be comparable to that of electrically powered (29%) and natural jellyfish (42%).”

SEE ALSO

AMOEBOID ROBOTS  NAVIGATE without BRAIN
http://www.technologyreview.com/blog/mimssbits/27638/
Japanese researcher draws inspiration from slime mold cognition
by Christopher Mims  / 03/09/2012

A new blob-like robot described in the journal Advanced Robotics uses springs, feet, “protoplasm” and a distributed nervous system to move in a manner inspired by the slime mold Physarum polycephalum. Watch it ooze across a flat surface, The Blob style:

Skip to 1:00 if you just want to be creeped out by its life-like quivering. (And if anyone can explain why, aside from wanting to kill its creepiness, the researcher stabs it with a pen-knife at 1:40, let me know in the comments.) Researcher Takuya Umedachi of Hiroshima University has been perfecting his blob-bot for years, starting with early prototypes that used springs but lacked an air-filled bladder.

This model didn’t work nearly as well, demonstrating, I guess, the need for a fluid or air-filled sack when you’re going to project your soft-bodied self in a new direction. (Hydraulic pressure is, after all, how our tongues work.) Umedachi modeled his latest version on the “true” slime mold, which has been shown to achieve a “human-like” decision-making capacity through properties emerging from the interactions of its individual spores. Slime molds appear to have general computational abilities, and you’ve probably heard that they can solve mazes. Here’s what they look like in the wild.


Yellow slime mold (detail) by frankenstoen


Yellow slime mold by frankenstoen

Soft-bodied robots can do things their rigid, insectoid brethren can’t, like worm their way into tight spots and bounce back in the face of physical insult. Umedachi’s goal isn’t simply to create a new kind of locomotion, however. He’s exploring the way in which robots that lack a centralized command center — i.e. a brain — can accomplish things anyway. Slime molds are a perfect model for this sort of thing, because they don’t even have the primitive neural nets that characterize the coordinated swimming and feeding actions in jellyfish.

From the abstract:

A fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the global physical interaction between the body parts stemming from the fluid circuit. The experimental results show that this robot exhibits adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on the design scheme for autonomous decentralized control systems.

Simulations indicate that the robot should be highly adaptable to deformation — i.e., squeezing through tight spaces.
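
To make the abstract’s “coupled oscillators with a completely local sensory feedback mechanism” concrete, here is a toy sketch in Python. The dynamics and constants are invented for illustration — these are not Umedachi’s equations: each unit runs its own phase oscillator and senses only its own deformation, yet all units interact through a shared pressure term standing in for the fluid circuit.

```python
import numpy as np

N = 8                   # body units around the blob's rim
OMEGA = 2 * np.pi       # intrinsic oscillator frequency, 1 Hz (assumed)
SIGMA = 3.0             # local feedback gain (assumed)
DT = 0.005

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, N)

for _ in range(4000):
    deform = np.cos(phases)        # each unit's radial deformation
    pressure = deform.mean()       # shared fluid pressure: the only global
                                   # quantity, set by physics, not a brain
    # Local sensory feedback: each unit adjusts its own phase using only
    # its own deformation and the pressure it feels; no unit ever reads
    # another unit's phase, so there is no central controller.
    dphases = OMEGA - SIGMA * (deform - pressure) * np.sin(phases)
    phases = (phases + dphases * DT) % (2 * np.pi)

print("final deformations:", np.round(np.cos(phases), 2))
```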


For a full account of the ways that Umedachi plans to reproduce the world’s most primitive form of cognition in robots, here’s a 2011 talk on the subject by the professor himself.

SILENCE ENFORCEMENT DEVICE
http://www.extremetech.com/computing/120583-new-speech-jamming-gun-hints-at-dystopian-big-brother-future

Japanese researchers have created a hand-held gun that can jam the words of speakers who are more than 30 meters (100ft) away. The gun has two purposes, according to the researchers: At its most basic, this gun could be used in libraries and other quiet spaces to stop people from speaking — but its second application is a lot more chilling.

The researchers were looking for a way to stop “louder, stronger” voices from saying more than their fair share in conversation. The paper reads: “We have to establish and obey rules for proper turn-taking when speaking. However, some people tend to lengthen their turns or deliberately interrupt other people when it is their turn in order to establish their presence rather than achieve more fruitful discussions. Furthermore, some people tend to jeer at speakers to invalidate their speech.” In other words, this speech-jamming gun was built to enforce “proper” conversations.

The gun works by listening in with a directional microphone, and then, after a short delay of around 0.2 seconds, playing it back with a directional speaker. This triggers an effect that psychologists call Delayed Auditory Feedback (DAF), which has long been known to interrupt your speech (you might’ve experienced the same effect if you’ve ever heard your own voice echoing through Skype or another voice comms program). According to the researchers, DAF doesn’t cause physical discomfort, but the fact that you’re unable to talk is obviously quite stressful.
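
The core trick is easy to prototype. Below is a minimal sketch of such a delayed-feedback loop in Python, assuming the third-party sounddevice package and an ordinary laptop microphone and speaker in place of the gun’s directional hardware; the 200 ms delay matches the figure reported for the device.

```python
# Minimal delayed-auditory-feedback loop: plays the microphone back
# through the speaker ~200 ms late.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000                    # Hz
DELAY = 0.2                             # seconds; the jamming sweet spot
delay_samples = int(SAMPLE_RATE * DELAY)

# FIFO holding exactly DELAY seconds of past microphone input.
buffer = np.zeros(delay_samples, dtype=np.float32)

def callback(indata, outdata, frames, time, status):
    """Emit the samples captured DELAY seconds ago; enqueue fresh input.
    Assumes the block size `frames` never exceeds the delay buffer."""
    global buffer
    outdata[:, 0] = buffer[:frames]
    buffer = np.concatenate([buffer[frames:], indata[:, 0]])

# Full-duplex stream: microphone in, delayed playback out.
with sd.Stream(samplerate=SAMPLE_RATE, channels=1, dtype="float32",
               callback=callback):
    print("Speak; your own voice returns 200 ms late. Ctrl+C to stop.")
    sd.sleep(60_000)
```

Randomizing the delay slightly from block to block would be a one-line change, and, per the researchers’ own tests described below, makes the jamming more effective.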

Suffice it to say, if you’re a firm believer in free speech, you should now be experiencing a deafening cacophony of alarm bells. Let me illustrate a few examples of how this speech-jamming gun could be used. At a political rally, an audience member could completely prevent Santorum, Romney, Paul, or Obama from speaking. On the flip side, a totalitarian state could point the speech jammers at the audience to shut them up. Likewise, when a celebrity or public figure appears on a live TV show, his contract could read “the audience must be silenced with speech jammers.”

Then there’s Harrison Bergeron, one of my favorite short stories by Kurt Vonnegut. In the story’s dystopian universe, everyone wears “handicaps” to ensure perfect social equality. Strong people must lug around heavy weights, beautiful people must wear masks, and intelligent people must wear headphones that play a huge blast of sound every few seconds, interrupting their thoughts. The more intelligent you are, the more regular the blasts.

Back here in our universe, it’s not hard to imagine a future where we are outfitted with a variety of implanted electronics or full-blown bionic organs. Just last week we wrote about Google’s upcoming augmented-reality glasses, which will obviously have built-in earbuds. Late last year we covered bionic eyes that can communicate directly with the brain, and bionic ears and noses can’t be far off.

In short, imagine if a runaway mega-corporation or government gains control of these earbuds. Not only could the intelligence-destroying blasts from Harrison Bergeron come to pass, but with Delayed Auditory Feedback it would be possible to render the entire population mute. Well, actually, that’s a lie: Apparently DAF doesn’t work with utterances like “ahhh!” or “boooo!” or other non-wordy constructs. So, basically, we’d all be reduced to communicating with grunts and gestures.

SPEECH-JAMMING
http://www.technologyreview.com/blog/arxiv/27620/
How to Build a Speech-Jamming Gun
Japanese researchers build a gun capable of stopping speakers in mid-sentence / 03/01/2012

The drone of speakers who won’t stop is an inevitable experience at conferences, meetings, cinemas, and public libraries. Today, Kazutaka Kurihara at the National Institute of Advanced Industrial Science and Technology in Tsukuba and Koji Tsukada at Ochanomizu University, both in Japan, present a radical solution: a speech-jamming device that forces recalcitrant speakers into submission.

The idea is simple. Psychologists have known for some years that it is almost impossible to speak when your words are replayed to you with a delay of a fraction of a second. Kurihara and Tsukada have simply built a handheld device consisting of a microphone and a speaker that does just that: it records a person’s voice and replays it to them with a delay of about 0.2 seconds. The microphone and speaker are directional so the device can be aimed at a speaker from a distance, like a gun.

In tests, Kurihara and Tsukada say their speech jamming gun works well: “The system can disturb remote people’s speech without any physical discomfort.” Their tests also identify some curious phenomena. They say the gun is more effective when the delay varies in time and more effective against speech that involves reading aloud than against spontaneous monologue.

Kurihara and Tsukada make no claims about the commercial potential of their device but list various applications. They say it could be used to maintain silence in public libraries and to “facilitate discussion” in group meetings. “We have to establish and obey rules for proper turn-taking when speaking,” they say. That has important implications. “There are still many cases in which the negative aspects of speech become a barrier to the peaceful resolution of conflicts,” they point out.

CONTACT
Kazutaka Kurihara
http://sites.google.com/site/qurihara/top-english
email : k-kurihara [ at ] aist.go.jp

Koji Tsukada
http://mobiquitous.com/index-e.html
email : tsuka [at] mobiquitous [dot] com

ABSTRACT
http://arxiv.org/abs/1202.6106
SpeechJammer: A System Utilizing Artificial Speech Disturbance with Delayed Auditory Feedback
by Kazutaka Kurihara and Koji Tsukada / 28 Feb 2012

“In this paper we report on a system, “SpeechJammer”, which can be used to disturb people’s speech. In general, human speech is jammed by giving back to the speakers their own utterances at a delay of a few hundred milliseconds. This effect can disturb people without any physical discomfort, and disappears immediately by stop speaking. Furthermore, this effect does not involve anyone but the speaker. We utilize this phenomenon and implemented two prototype versions by combining a direction-sensitive microphone and a direction-sensitive speaker, enabling the speech of a specific person to be disturbed. We discuss practical application scenarios of the system, such as facilitating and controlling discussions. Finally, we argue what system parameters should be examined in detail in future formal studies based on the lessons learned from our preliminary study.”

SPEECHJAMMER
http://www.wired.com/underwire/2012/03/japanese-speech-jamming-gun/

Two Japanese researchers recently introduced a prototype for a device they call a SpeechJammer that can literally “jam” someone’s voice — effectively stopping them from talking. Now they’ve released a video of the device in action. “We have to establish and obey rules for proper turn-taking,” write Kazutaka Kurihara and Koji Tsukada in their article on the SpeechJammer (PDF). “However, some people tend to lengthen their turns or deliberately disrupt other people when it is their turn … rather than achieve more fruitful discussions.”

The researchers released the video after their paper went viral Thursday, to the authors’ apparent surprise. “Do you know why our project is suddenly becoming hot now?” asked Kurihara, a research scientist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, in an e-mail exchange with Wired.com. (Kurihara’s partner Tsukada is an assistant professor at Ochanomizu University in Tokyo.)

The design of the SpeechJammer is deceptively simple. It consists of a direction-sensitive microphone and a direction-sensitive speaker, a motherboard, a distance sensor and some relatively straightforward code. The concept is simple, too — it operates on the well-studied principle of delayed auditory feedback. By playing someone’s voice back to them, at a slight delay (around 200 milliseconds), you can jam a person’s speech.

Sonic devices have popped up in pop culture in the past. In sci-fi author J.G. Ballard’s short story “The Sound-Sweep,” published in 1960, a vacuum cleaner called a “sonovac” sweeps up the debris of old sounds. The wily German composer Karlheinz Stockhausen had plans for a “sound swallower,” which would cancel unwanted sounds in the environment using the acoustic principle of destructive interference. And in the 1984 German film Decoder, special yellow cassette tapes play “anti-Muzak” that destroys the lulling tones of Muzak, stimulating diners at a fast-food restaurant to throw up en masse and start rioting.

But instead of sci-fi, the Japanese researchers behind the SpeechJammer looked to medical devices used to help people with speech problems. Delayed auditory feedback, or DAF, devices have been used to help stutterers for decades. If a stutterer hears his own voice at a slight delay, stuttering often improves. But if a non-stutterer uses a DAF device designed to help stutterers, he can start stuttering — and the effect is more pronounced if the delay is longer, up to a certain point.

“We utilized DAF to develop a device that can jam remote physically unimpaired people’s speech whether they want it or not,” write the researchers. “[The] device possesses one characteristic that is different from the usual medical DAF device; namely, the microphone and speaker are located distant from the target.”

Being at a distance from the target means it’s possible to aim the device at people who are several feet away — sort of like a TV B-Gone, but for people. Bothered by what someone at a meeting is saying? Point the SpeechJammer at him. Can’t stand your nattering in-laws? Time for the SpeechJammer. In the wrong hands — criminals, for instance, or repressive governments — the device could have potentially sinister applications. For now, it remains a prototype.

INSPIRATION
http://www.wired.com/underwire/2012/03/speech-jamming-gun-inspiration/

“One day I just came by a science museum and enjoyed a demonstration about Delayed Auditory Feedback (DAF) at [the] cognitive science corner,” says Kurihara. “When I spoke to a microphone, my voice came back to me after a few hundred millisecond delay. Then, I could not continue to speak any more. That’s fun!”

Kurihara soon realized his adventures in the science museum could be applicable to other fields. He was already interested in developing a system that “controls appropriate turn-taking at discussions.” The science museum visit was his “aha!” moment. “Then I came up with the gun-type SpeechJammer idea utilizing DAF,” says Kurihara. “That’s the destiny.”

Kurihara enlisted the talents of Koji Tsukada, an assistant professor at Tokyo’s Ochanomizu University who he calls “the gadget master.” Tsukada has been involved in a number of strange and intriguing projects, including the LunchCommunicator, a “lunchbox-type device which supports communication between family members”; the SmartMakeupSystem, which “helps users find new makeup methods for use with their daily cosmetics”; and the EaTheremin, a “fork-type instrument that enables users to play various sounds by eating foods”.

Tsukada introduced Kurihara to a parametric speaker kit, which they could use to convey sound in a very direction-sensitive way. “After I explained him my idea, he soon agreed to join my project,” says Kurihara. “It was a marriage between science and gadgets!”

As for SpeechJammer’s potentially sinister uses? “We hope SpeechJammer is used for building the peaceful world,” says Kurihara. The world can only hope.

Scientists say they’re getting closer to Matrix-style instant learning
http://io9.com/5867113/scientists-say-theyre-paving-the-way-towards-matrix+style-learning–but-is-it-safe

What price effortless learning? In a paper published in the latest issue of Science, neuroscientists say they’ve developed a novel method of learning that can cause long-lasting improvement in tasks that demand a high level of visual performance. And while the so-called neurofeedback method could one day be used to teach you kung fu, or to aid spinal-injury patients on the road to rehabilitation, evidence also suggests the technology could be used to target people without their knowledge, opening the door to numerous important ethical questions. According to a press release from the National Science Foundation:

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It’s the kind of thing seen in Hollywood’s “Matrix” franchise.

Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person’s visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future.

But here’s the bit that’s really interesting (and also pretty creepy): the researchers found that this novel learning approach worked even when test subjects weren’t aware of what they were learning:

“The most surprising thing in this study is that mere inductions of neural activation patterns…led to visual performance improvement…without presenting the feature or subjects’ awareness of what was to be learned,” said lead researcher Takeo Watanabe. He continues:

We found that subjects were not aware of what was to be learned while behavioral data obtained before and after the neurofeedback training showed that subjects’ visual performance improved specifically for the target orientation, which was used in the neurofeedback training.

Is this research mind-blowing and exciting? Absolutely. I mean come on — automated learning? Yes. Sign me up. But according to research co-author Mitsuo Kawato, the neurofeedback mechanism could just as soon be used for purposes of hypnosis or covert mind control. And that… I’m not so keen on. “We have to be careful,” he explains, “so that this method is not used in an unethical way.”

VISUAL PERCEPTUAL LEARNING
http://www.nsf.gov/news/news_videos.jsp?cntn_id=122523&media_id=71600
http://www.nsf.gov/news/news_summ.jsp?cntn_id=122523
New research suggests it may be possible to learn high-performance tasks with little or no conscious effort / December 8, 2011

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It’s the kind of thing seen in Hollywood’s “Matrix” franchise. Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person’s visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future. “Adult early visual areas are sufficiently plastic to cause visual perceptual learning,” said lead author and BU neuroscientist Takeo Watanabe of the part of the brain analyzed in the study. Neuroscientists have found that pictures gradually build up inside a person’s brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example. Researchers studied the early visual areas for their ability to cause improvements in visual performance and learning. “Some previous research confirmed a correlation between improving visual performance and changes in early visual areas, while other researchers found correlations in higher visual and decision areas,” said Watanabe, director of BU’s Visual Science Laboratory. “However, none of these studies directly addressed the question of whether early visual areas are sufficiently plastic to cause visual perceptual learning.” Until now.

Boston University post-doctoral fellow Kazuhisa Shibata designed and implemented a method using decoded fMRI neurofeedback to induce a particular activation pattern in targeted early visual areas that corresponded to a pattern evoked by a specific visual feature in a brain region of interest. The researchers then tested whether repetitions of the activation pattern caused visual performance improvement on that visual feature. The result, say researchers, is a novel learning approach sufficient to cause long-lasting improvement in tasks that require visual performance. What’s more, the approach worked even when test subjects were not aware of what they were learning.
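
In outline, the method pairs an offline decoder with an online feedback display. The sketch below is a heavily simplified illustration with synthetic data — all shapes, names, and numbers are invented, and it is not the authors’ pipeline: a linear decoder trained to recognize the voxel pattern evoked by the target orientation later scores each new pattern, and that score alone is what the subject sees.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_VOXELS = 500                 # stand-in for an early-visual-cortex ROI

# Offline decoder construction: label 1 = target orientation present.
X_train = rng.normal(size=(200, N_VOXELS))    # synthetic voxel patterns
y_train = rng.integers(0, 2, size=200)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback_score(current_pattern):
    """Decoder's confidence that the current pattern matches the target
    state; rendered to the subject as the size of a feedback disc."""
    return decoder.predict_proba(current_pattern.reshape(1, -1))[0, 1]

# Online trial: the subject tries to make the disc bigger, never told
# what mental content the decoder is actually looking for.
pattern = rng.normal(size=N_VOXELS)
print(f"disc size this trial: {feedback_score(pattern):.2f}")
```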

“The most surprising thing in this study is that mere inductions of neural activation patterns corresponding to a specific visual feature led to visual performance improvement on the visual feature, without presenting the feature or subjects’ awareness of what was to be learned,” said Watanabe, who developed the idea for the research project along with Mitsuo Kawato, director of ATR lab and Yuka Sasaki, an assistant in neuroscience at Massachusetts General Hospital. “We found that subjects were not aware of what was to be learned while behavioral data obtained before and after the neurofeedback training showed that subjects’ visual performance improved specifically for the target orientation, which was used in the neurofeedback training,” he said.

The finding brings up an inevitable question. Is hypnosis or a type of automated learning a potential outcome of the research? “In theory, hypnosis or a type of automated learning is a potential outcome,” said Kawato. “However, in this study we confirmed the validity of our method only in visual perceptual learning. So we have to test if the method works in other types of learning in the future. At the same time, we have to be careful so that this method is not used in an unethical way.”

CONTACT
Takeo Watanabe
http://www.bu.edu/psych/faculty/takeo/
http://www.bu.edu/visionlab/
email: takeo [at] bu [dot] edu

ABSTRACT
http://www.sciencemag.org/content/334/6061/1413.abstract

“It is controversial whether the adult primate early visual cortex is sufficiently plastic to cause visual perceptual learning (VPL). The controversy occurs partially because most VPL studies have examined correlations between behavioral and neural activity changes rather than cause-and-effect relationships. With an online-feedback method that uses decoded functional magnetic resonance imaging (fMRI) signals, we induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants’ awareness of what was to be learned. The induced activation caused VPL specific to the orientation. These results suggest that early visual areas are so plastic that mere inductions of activity patterns are sufficient to cause VPL. This technique can induce plasticity in a highly selective manner, potentially leading to powerful training and rehabilitative protocols.”

or HEY COOL MY TRACKING DEVICE has a DRUM MACHINE (and GPS!)
http://androidsecuritytest.com/features/logs-and-services/loggers/carrieriq/

NOW with KEYSTROKE LOGGING
http://www.informationweek.com/news/security/mobile/231903096
Smartphone Invader Tracks Your Every Move
Carrier IQ software, installed on more than 141 million mobile phones, tracks GPS location, websites visited, search queries, and all keys pressed.
by Mathew J. Schwartz  /  November 16, 2011

Software on many smartphones is tracking every move and website visited, without the knowledge of the phone’s user. And that information is being collected by a little-known company, which could be sharing it with law enforcement agencies without requiring a subpoena and without keeping a record of the query. That’s among the conclusions that can be drawn from the discovery of a rootkit that’s running on a number of Verizon and Sprint phones, which tracks not just phone numbers dialed, but also the user’s GPS coordinates, websites visited, keys pressed, and many website searches, according to security researcher Trevor Eckhart. He discovered the rootkit after tracing suspicious network activity, which he suspected was a virus infection, in a data center that he manages. But he traced the activity back to software made by Carrier IQ, which describes its “mobile service delivery” software as a tool for measuring smartphone service quality and usage via software embedded in handsets. “The Carrier IQ solution gives you the unique ability to analyze in detail usage scenarios and fault conditions by type, location, application, and network performance while providing you with a detailed insight into the mobile experience as delivered at the handset rather than simply the state of the network components carrying it,” according to the website.

Carrier IQ software runs on 141 million handsets. In the United States, it ships installed by default on many handsets sold via Sprint and Verizon, and runs on a number of platforms, including Android, BlackBerry, and Nokia smartphones and tablets. Rather than carriers using Carrier IQ software to collect data and then store it themselves, it appears that Carrier IQ handles both the data collection and related analytics. According to the company’s privacy and security policy, “information transmitted from enabled mobile devices is stored in a secure data center facility that meets or exceeds industry best practice guidelines for security policies and procedures.” The policy doesn’t detail those policies and procedures.

Eckhart said in an interview that the software is often configured by carriers to hide its presence from users. That means it functions per the Wikipedia definition of a rootkit: “Software that enables continued privileged access to a computer while actively hiding its presence from administrators by subverting standard operating system functionality or other applications.” The software, however, doesn’t have to be stealthy. Eckhart said that the default version of Carrier IQ “makes its presence known by putting a checkmark in the status bar,” and can generate surveys if calls get dropped or browsers crash unexpectedly, to help engineers identify the underlying problem. Still, after reviewing public-facing training videos he found online, Eckhart said he was alarmed to see just how much data was being gathered by Carrier IQ, and how easily it could be searched en masse–all of which makes him suspicious about how the data is being used. “If this was just legit use, say monitoring dropped calls, why would all on/off switches be stripped and made completely invisible? Users should always have an option to ‘opt-in’ to a program. There are obviously other uses,” he said. “It is a massive invasion of privacy.”

Carrier IQ makes the information it collects available to its customers via a portal. Eckhart said in a blog post that “from leaked training documents we can see that portal operators can view and [search] metrics by equipment ID, subscriber ID, and more.” As a result, anyone with access to the portal can “know ‘Joe Anyone’s’ location at any given time, what he is running on his device, keys being pressed, applications being used,” he said. Carrier IQ spokeswoman Mira Woods said, “Our customers select which metrics they need to gather based on their business need–such as network planning, customer care, device performance–within the bounds of the agreement they form with their end users. These business rules are translated into a profile, placed on the device which provides instructions on what metrics to actually gather.” She said that all collected data gets transmitted by Carrier IQ to carriers using a “secure encrypted channel,” at which point they typically use it for customer service or analyzing network performance. “The further processing or reuse of this data is subject to the agreement formed between our customer and their end user (of the mobile device) and the applicable laws of the country in which they are operating,” she said.

One concern for privacy advocates, however, is that carriers apparently share information of the type collected by this software freely with law enforcement agencies. Notably, research published by privacy expert Christopher Soghoian in 2009 found that Sprint had shared customers’ GPS location information with law enforcement agencies more than 8 million times over a 13-month period. Sprint had also developed tools to automatically fulfill the large volume of law enforcement agency requests, which seem to occur in a legal gray area that results in none of the requests or shared data queries being recorded. Eckhart said the information being collected by Carrier IQ was even more expansive than what Sprint had shared in 2009. “We can see from the dashboard that GPS data can be viewed historically or in real time by date, time, whatever. That makes for a very efficient law enforcement portal, just like what’s detailed being blatantly abused in Soghoian’s article. It also relates to how Verizon is gathering info for their new ad tracking program,” he said. “Things like exact keypress data being stored as well shows this. What use would what words I’m typing ever be to ‘network performance’? Maybe words per minute would be useful, but it’s not that–it’s an exact record of what you are typing.”

Verizon has publicly acknowledged that it uses Carrier IQ statistics, both for mobile usage information (device location, app and feature usage, and website addresses, which may include search string) as well as consumer information (use of Verizon products and services, as well as demographic information, such as gender, age, and dining preferences). It also offers customers a way to opt out of this usage. Meanwhile, “Sprint is known to collect Carrier IQ data because users have the application running reporting to them, but have no privacy policy, retention policy, or public information on what they use the data for,” said Eckhart. But Sprint spokesman Jason Gertzen said via email that Sprint uses the information for diagnostic purposes. “Carrier IQ provides information that allows us to analyze our network performance and identify where we should be improving service. We also use the data to understand device performance so we can figure out when issues are occurring,” he said. “The information collected is not sold and we don’t provide a direct feed of this data to anyone outside of Sprint.” Deactivating installed Carrier IQ software can be difficult, at least as implemented by many carriers. While Samsung Android devices offer a somewhat hidden Carrier IQ on/off switch, HTC Android devices offer no such feature. Accordingly, if you buy an ex-Sprint phone off of eBay and Carrier IQ software is installed, you’re being tracked, said Eckhart. But Carrier IQ’s Woods said that her company’s software is set to disable data collection if the device’s SIM card or mobile carrier changes.

How can you determine if the software is running on a device? “Logging TestApp scanner will detect it in the kernel–use ‘Check Props’ feature–as well as files used in the regular Loggers scan,” said Eckhart. He’s the developer behind Logging TestApp, which can also be used to reveal the Carrier IQ menus often hidden by carriers when they roll out the application. If Carrier IQ is found and isn’t wanted, deleting it can also be difficult. “The only way to remove Carrier IQ is with advanced skills. If you choose to void your warranty and unlock your bootloader you can (mostly) remove Carrier IQ,” he said. “Logging TestApp can identify files used in logging and you can manually patch or use [the] Pro version to automatically remove [them].”
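
For the curious, a cruder check than Eckhart’s app is possible from a computer with adb access to the handset. Treat the strings below as assumptions: the package name com.carrieriq.iqagent and the “iqd” daemon were widely reported at the time, but carriers shipped customized builds, so an empty result proves nothing and Logging TestApp remains the more thorough check.

```python
# Rough scan for Carrier IQ components over adb (assumes adb is on PATH
# and a device is connected with USB debugging enabled).
import subprocess

SUSPECT = ["carrieriq", "iqagent", "iqd"]   # assumed identifiers

def adb(*args):
    """Run an adb command and return its stdout as text."""
    out = subprocess.run(["adb", *args], capture_output=True, text=True)
    return out.stdout

# Installed packages plus running processes, searched case-insensitively.
listing = adb("shell", "pm", "list", "packages") + adb("shell", "ps")
hits = [line.strip() for line in listing.lower().splitlines()
        if any(s in line for s in SUSPECT)]
print("\n".join(hits) if hits else "no obvious Carrier IQ components found")
```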

Android expert Tim Schofield has also released a YouTube video showing how to remove Carrier IQ from the Samsung Epic 4G running Android Gingerbread 2.3.5, but warned that it would require flashing the ROM. “What [Carrier IQ] does is log things you do and send it to Sprint, so it’s like a spyware thing that you don’t want on your phone,” he said.

Samsung screenshots thanks to k0nane on XDA; see his full post on removing Carrier IQ.

CARRIER IQ

http://www.theinquirer.net/inquirer/news/2125853/carrier-iq-defends-android-rootkits-accusation
Carrier IQ defends against Android rootkits accusation
Handset makers and carriers to blame
by Lawrence Latif / Nov 17 2011

MOBILE ANALYTICS OUTFIT Carrier IQ is facing a growing firestorm over its secretive analytics software that is deeply embedded into mobile operating systems such as Google’s Android. Carrier IQ, which claims to provide ‘mobile intelligence’, has been accused of supplying rootkits that track user interactions on smartphones. Carrier IQ’s software is found on many operating systems including Google’s Android and records application runtimes, media playback, location statistics and when calls are received.

An investigation conducted by the smart chaps at XDA-Developers brought Carrier IQ’s activities to light, with the investigators labeling the software as a rootkit. It also found that stopping the service was not a trivial matter, since it’s hidden under several layers of abstraction.

Carrier IQ became aware of the growing backlash against its software and issued a release in which it claimed device manufacturers use its software to “improve the quality of the network, understand device issues and ultimately improve the user experience”. It went on to categorically deny that it was tracking keystrokes or providing tracking tools.

As for the data collected by Carrier IQ’s software, the firm went on to say, “Our customers have stringent policies and obligations on data collection and retention. Each customer is different and our technology is customized to their exacting needs and legal requirements.”

Being fair to Carrier IQ, it is not secretly splicing tracker-ware into its products as Sony did; rather, carriers and handset makers are opting to include the software without informing users. The handset makers should be questioned as to their motives for including such software and asked to provide detailed documents listing what they collect, what they do with the information and how long the information is stored. Whatever the reason for including Carrier IQ’s software, the facts are that users were unaware of it and it is engineered to be extremely difficult to remove. Those facts alone are enough to warrant serious concern.


Carrier IQ markets its software as a “mobile service intelligence solution” on its Web site. “We give wireless carriers and handset manufacturers unprecedented insight into their customers’ mobile experience.” (Credit: Carrier IQ)

CONTACT
http://www.carrieriq.com/contact/index.htm

YOU SAY ‘ROOTKIT,’ I SAY ‘DIAGNOSTIC TOOL’
http://news.cnet.com/8301-27080_3-57326974-245/you-say-rootkit-i-say-diagnostic-tool/
by Elinor Mills /  November 17, 2011

Android developer Trevor Eckhart recently noticed something odd on several EVO HTC devices: hidden software that phoned home to the carrier with details about how the phone was being used and where it was. The software, Carrier IQ, tracked the location of the phone, what keys were pressed, which Web pages were visited, when calls were placed, and other information on how the device is used and when.

Eckhart discovered that Carrier IQ can be shown as present on the phone to users or configured as hidden, which was the case on the HTC phones he analyzed. And he found what he described as “leaked training documents” that indicate that carriers can view customer usage information via a remote portal that displays devices by equipment ID and subscriber ID. “The only way to remove Carrier IQ is with advanced skills,” Eckhart wrote in a report, published on the Web on Monday. “If you choose to void your warranty and unlock your bootloader you can (mostly) remove Carrier IQ.” Sprint, meanwhile, “has no privacy policy, retention policy, or public information on what they use the data for,” Eckhart wrote.

HTC Android devices have no on-off switch for Carrier IQ, while Samsung devices do, but it is not easily accessible or pointed out to users, he said. Because customers do not give explicit permission for this data collection and don’t even know this software is on their phones, and they can’t opt out of it, Eckhart says it is a clear privacy violation. He likens Carrier IQ to malware. “Carrier IQ is rootkit software,” he wrote in his report. “It listens on the phones for commands contained in ‘tasking profiles’ sent a number of ways and returns whatever ‘metric’ was asked for.”

According to Wikipedia, a rootkit is software “that enables continued privileged access to a computer while actively hiding its presence from administrators by subverting standard operating system functionality or other applications.” Typically, hackers install a rootkit onto a target system by exploiting a software vulnerability or using a stolen password. Rootkits are characterized by stealth and malicious purpose. Definitions aside, the types of data gathered are enough to set off alarms for privacy-minded folk. “If it’s just for ‘network performance’ why wouldn’t they give users a choice?” Eckhart said in an e-mail to CNET late last night. “Any program logging this extent of personal information should always be opt-in.”

A Sprint spokesman provided a general statement about the use of Carrier IQ, but did not provide comment to follow-up questions about whether customers know about the data collection and why they can’t opt out. Here is the Sprint statement:

“Carrier IQ provides information that allows Sprint, and other carriers that use it, to analyze our network performance and identify where we should be improving service. We also use the data to understand device performance so we can figure out when issues are occurring. We collect enough information to understand the customer experience with devices on our network and how to address any connection problems, but we do not and cannot look at the contents of messages, photos, videos, etc., using this tool. The information collected is not sold and we don’t provide a direct feed of this data to anyone outside of Sprint.

Sprint maintains a serious commitment to respecting and protecting the privacy and security of each customer’s personally identifiable information and other customer data. A key element of this involves communicating with our customers about our information privacy practices. The Sprint privacy policy makes it clear we collect information that includes how a device is functioning and how it is being used. Carrier IQ is an integral part of the Sprint service.”

Carrier IQ representatives said the data carriers collect with their software has a legitimate purpose and is handled responsibly. “We are collecting information that would be regarded by most people as sensitive,” Andrew Coward, vice president of marketing for Carrier IQ, told CNET today. “So we work within the network of the operator or in the facilities [they approve] and which are up to their standards as far as data retention” and encryption.

Mountain View, Calif.-based Carrier IQ launched six years ago expressly to offer software that serves as an “embedded performance management tool,” he said. “This has caught us off guard in that the technology has been around a long time,” he added. “We’re in the business of counting things that happen on the phone to help carriers improve service.” For example, knowing exactly where a phone call was dropped can help a carrier identify network troubles in a geographic location. “We do want to know when you’ve had a dropped call, if an SMS didn’t work and if you’ve got battery life problems,” Coward said.

Information on keys that are pressed and how many times the phone is charged can provide activity information over the life of a phone, which is important for device manufacturers, he said. “We are not interested and do not gather the text or the text message and do not have the capacity to do that,” he said. Processing specific data like that from millions of devices would be impractical to do, he said. In addition, the data logged is not real-time in Carrier IQ, which diminishes its usefulness, and carriers have other ways of getting sensitive user data if they want, according to Coward. “You can’t make a phone call on the mobile network without them knowing where you are,” he said. “Our customers believe that they have obtained permission from their customers to gather this performance data.”

But Eckhart questioned the legality of carriers collecting keypresses and some of the other information. “As far as Sprint, the data they are logging is very personal,” he said in his e-mail. “How do we know who is getting this? Every customer service personnel? Law enforcement? Is my location and browsing history stored forever?”

It’s unclear what devices have Carrier IQ software installed. Coward said Carrier IQ is used by more than a dozen device manufacturers, including smartphones and tablets, but he declined to name the companies or devices. Eckhart names HTC, Samsung, Nokia, BlackBerry, Sprint, and Verizon in his report on Carrier IQ. HTC did not respond to requests for comment and a Samsung representative said she would try to get comment. But a Verizon representative said the company does not use Carrier IQ on its devices and Coward confirmed that. (Eckhart’s report linked to this Verizon Web page that talks about collecting data on phone location, Web sites visited and other information.) Eckhart did not immediately respond to e-mails and phone calls seeking a follow-up interview today. In the paranoid world of security researchers, the notion of privacy is nine-tenths perception and potential. Carriers should make it clear what data they are collecting and what benefit doing so provides to the customers. And, if possible, it should be opt in.

COMPLETE LIST of ALL PHONES with CARRIER IQ SPYWARE INSTALLED
http://gizmodo.com/5868732/the-complete-list-of-all-the-phones-with-carrier-iq-spyware-installed

Responding to the US Senate request led by Senator Al Franken, AT&T, Sprint, HTC, and Samsung have sent the list of all the phones with Carrier IQ spyware installed on them.

The carriers have also admitted that Carrier IQ captured the content of text messages “under certain conditions.”

Here’s the complete list:

AT&T
AT&T claims about 900,000 users using phones with Carrier IQ. The software is active on eleven AT&T wireless consumer devices:

• Motorola Atrix 2
• Motorola Bravo
• Pantech Pursuit II
• Pantech Breeze 3
• Pantech P5000 (Link 2)
• Pantech Pocket
• Sierra Wireless Shockwave
• LG Thrill
• ZTE Avail
• ZTE Z331
• SEMC Xperia Play

It’s also installed but not active “due to the potential for the software agent to interfere with the performance” of the following phones:

• HTC Vivid
• LG Nitro
• Samsung Skyrocket

Carrier IQ is also packaged in the free AT&T Mark the Spot application, available for Android and RIM.

Sprint
26 million active Sprint devices have the Carrier IQ software installed, says Sprint. That’s almost half of its 53.4 million subscribers, so you can assume it is installed on all the Android phones from the manufacturers Sprint reported to the US Senate:

• Audiovox
• Franklin
• HTC
• Huawei
• Kyocera
• LG
• Motorola
• Novatel
• Palmone
• Samsung
• Sanyo
• Sierra Wireless

Samsung
Samsung claims 25 million phones affected. It has directly installed Carrier IQ at the factory in the following models:

Sprint
• SPH-M800 (Samsung Instinct)
• SPH-M540 (Samsung Rant)
• SPH-M630 (Samsung Highnote)
• SPH-M810 (Samsung Instinct s30)
• SPH-M550 (Samsung Exclaim)
• SPH-M560 (Samsung Reclaim)
• SPH-M850 (Samsung Instinct HD)
• SPH-I350 (Samsung Intrepid)
• SPH-M900 (Samsung Moment)
• SPH-M350 (Samsung Seek)
• SPH-M570 (Samsung Restore)
• SPH-D700 (Samsung Epic 4G)
• SPH-M910 (Samsung Intercept)
• SPH-M920 (Samsung Transform)
• SPH-M260 (Samsung Factor)
• SPH-M380 (Samsung Trender)
• SPH-M820 (Samsung Galaxy Prevail)
• SPH-M580 (Samsung Replenish)
• SPH-D600 (Samsung Conquer 4G)
• SPH-M930 (Samsung Transform Ultra)
• SPH-D710 (Samsung Epic 4G Touch)
• SPH-M220
• SPH-M240
• SPH-M320
• SPH-M330
• SPH-M360
• SPH-P100
• SPH-Z400

T-Mobile
• T989 (Samsung Hercules)
• T679 (Samsung Galaxy W)

Cricket
• SCH-R500 (Samsung Hue)
• SCH-R631 (Samsung Messager Touch)
• SCH-R261 (Samsung Chrono)
• SCH-R380 (Samsung Freeform III)

AT&T
• SGH-i727 (Samsung Galaxy S II Skyrocket)

HTC
HTC preinstalled Carrier IQ spyware on about 6.3 million Android phones:

Sprint
• Snap
• Touch Pro 2
• Hero
• EVO 4G
• EVO Shift 4G
• EVO Design

T-Mobile
• Amaze 4G

AT&T
• Vivid

What is Carrier IQ?
Carrier IQ logs information about your whereabouts as well as other personal data such as browsing history, application usage and phone numbers.

The Carrier IQ application also captures the content of your text messages, according to AT&T. This happens when you are talking on the phone and you send or receive a text message: “the CIQ software also captured the content of SMS text messages—when and only when—such messages were sent or received while a voice call was in progress.” [US Senator Al Franken’s response, AT&T response (PDF), Sprint response (PDF), Samsung response (PDF), HTC response (PDF), Carrier IQ response (PDF); via The Verge and Business Week]

LOOK HERE
http://www.nydailynews.com/ny_local/2010/11/15/2010-11-15_nypd_begins_photographing_suspects_irises_preventing_prisoners_from_disguising_t.html

“The NYPD began taking pictures of suspects’ irises on Monday. The new program, which started in Manhattan and will expand to other boroughs by next month, is designed to prevent suspects from disguising their identities. The technology allows police to match a prisoner to his or her iris in as little as 5 seconds. Police said the move was prompted by a recent case in which a felon passed himself as a lesser offender and walked out of the courthouse. Police said the eye shots will not be kept on file if the charges are dismissed or if the case is sealed. “They’re being treated as other cases would be,” said Deputy Commissioner Paul Browne, the NYPD’s top spokesman.”

http://www.nbcnewyork.com/news/local/NYPD-to-Start-Iris-Scanning-Criminal-Suspects–108321624.html

“Along with fingerprints and mug shots, the New York City Police Department is now taking photographs of the irises of crime suspects. The NYPD says the images will be used to help avoid cases of mistaken identity. The process takes about five minutes. Every suspect will be scanned again using a handheld device shortly before they are arraigned to make sure the irises match. Police say the software, handheld device and cameras cost about $23,800 each, and 21 systems will be used around the city. Central booking in Manhattan started taking photos Monday. The devices will be in use in Brooklyn and the Bronx in the upcoming weeks, and later in Staten Island and Queens.”
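
Matches this fast are typically done not on raw photographs but on binary “iris codes” compared with a fractional Hamming distance, following Daugman’s widely used approach. The NYPD has not published its vendor’s internals, so the sketch below is a generic illustration with synthetic codes and an assumed decision threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
CODE_BITS = 2048            # typical iris-code length
MATCH_THRESHOLD = 0.32      # common operating point in the literature

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both captures
    were imaged cleanly (eyelids and reflections are masked out)."""
    valid = mask_a & mask_b
    diff = (code_a ^ code_b) & valid
    return np.count_nonzero(diff) / np.count_nonzero(valid)

enrolled = rng.integers(0, 2, CODE_BITS, dtype=np.uint8)
mask = np.ones(CODE_BITS, dtype=np.uint8)

# Fresh capture of the same eye: a few bits flip from imaging noise.
probe = enrolled.copy()
probe[rng.random(CODE_BITS) < 0.05] ^= 1

hd = hamming_distance(enrolled, probe, mask, mask)
print(f"Hamming distance {hd:.3f} ->",
      "match" if hd < MATCH_THRESHOLD else "no match")
```

Against a database, the same distance is simply computed per enrolled code, which is why the comparison itself takes seconds even when the full booking process takes minutes.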

http://www.fastcompany.com/1683302/iris-scanners-create-the-most-secure-city-in-the-world-welcomes-big-brother

We’ve all seen and obsessively referenced Minority Report, Steven Spielberg’s adaptation of Philip K. Dick’s dystopian future, where the public is tracked everywhere they go, from shopping malls to work to mass transit to the privacy of their own homes. The technology is here. I’ve seen it myself. It’s seen me, too, and scanned my irises.

Biometrics R&D firm Global Rainmakers Inc. (GRI) announced today that it is rolling out its iris scanning technology to create what it calls “the most secure city in the world.” In a partnership with Leon — one of the largest cities in Mexico, with a population of more than a million — GRI will fill the city with eye-scanners that will help law enforcement (not to mention marketers) revolutionize the way we live.

“In the future, whether it’s entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris,” says Jeff Carter, CDO of Global Rainmakers. Before coming to GRI, Carter headed a think tank partnership between Bank of America, Harvard, and MIT. “Every person, place, and thing on this planet will be connected [to the iris system] within the next 10 years,” he says.

Leon is the first step. To implement the system, the city is creating a database of irises. Criminals will automatically be enrolled, their irises scanned once convicted. Law-abiding citizens will have the option to opt-in.

When these residents catch a train or bus, or take out money from an ATM, they will scan their irises, rather than swiping a metro or bank card. Police officers will monitor these scans and track the movements of watch-listed individuals. “Fraud, which is a $50 billion problem, will be completely eradicated,” says Carter. Not even the “dead eyeballs” seen in Minority Report could trick the system, he says. “If you’ve been convicted of a crime, in essence, this will act as a digital scarlet letter. If you’re a known shoplifter, for example, you won’t be able to go into a store without being flagged. For others, boarding a plane will be impossible.”

GRI’s scanning devices are currently shipping to the city, where integration will begin with law enforcement facilities, security check-points, police stations, and detention areas. This first phase will cost less than $5 million. Phase II, which will roll out in the next three years, will focus more on commercial enterprises. Scanners will be placed in mass transit, medical centers and banks, among other public and private locations.

The devices range from large-scale scanners like the Hbox (shown in the airport-security prototype above), which can snap up to 50 people per minute in motion, to smaller scanners like the EyeSwipe and EyeSwipe Mini, which can capture the irises of between 15 to 30 people per minute.

I tested these devices at GRI’s R&D facilities in New York City last week. It took less than a second for my irises to be scanned and registered in the company’s database. Every time I went through the scanners after that–even when running through (because everybody runs, right, Tom Cruise?)–my eyes were scanned and identified correctly. (You can see me getting scanned on the Hbox in the video below. “Welcome Austin,” the robotic voice chimes.)

For such a Big Brother-esque system, why would any law-abiding resident ever volunteer to scan their irises into a public database, and sacrifice their privacy? GRI hopes that the immediate value the system creates will alleviate any concern. “There’s a lot of convenience to this–you’ll have nothing to carry except your eyes,” says Carter, claiming that consumers will no longer be carded at bars and liquor stores. And he has a warning for those thinking of opting out: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.”

This vision of the future eerily matches Minority Report, and GRI knows it. “Minority Report is one possible outcome,” admits Carter. “I don’t think that’s our company’s aim, but I think what we’re going to see is an environment well beyond what you see in that movie–minus the precogs, of course.”

When I asked Carter whether he felt the film was intended as a dystopian view of the future of privacy, he pointed out that much of our private life is already tracked by telecoms and banks, not to mention Facebook. “The banks already know more about what we do in our daily life–they know what we eat, where we go, what we purchase–our deepest secrets,” he says. “We’re not talking about anything different here–just a system that’s good for all of us.”

One potential benefit? Carter believes the system could be used to intermittently scan truck drivers on highways to make sure they haven’t been on the road for too long.

GRI also predicts that iris scanners will help marketers. “Digital signage,” for example, could enable advertisers to track behavior and emotion. “In ten years, you may just have one sensor that is literally able to identify hundreds of people in motion at a distance and determine their geo-location and their intent–you’ll be able to see how many eyeballs looked at a billboard,” Carter says. “You can start to track from the point a person is browsing on Google and finds something they want to purchase, to the point they cross the threshold in a Target or Walmart and actually make the purchase. You start to see the entire life cycle of marketing.”

So will we live out the future under iris scanners and constant Big Brother monitoring? According to Carter, eye scanners will soon be so cost-effective–between $50 and $100 each–that in the not-too-distant future we’ll have “billions and billions of sensors” across the globe.

Goodbye 2010. Hello 1984.

[TITLE CREDIT]
http://www.metafilter.com/100492/Not-the-real-thing-but-an-incredible-simulation#3516749

CLEVERBOT
http://cleverbot.com/
http://www.existor.com/ai-creator



vs CLEVERBOT
http://www.metafilter.com/100492/Not-the-real-thing-but-an-incredible-simulation#3516722
Cleverbots get testy and bicker like siblings when you make them talk to each other:

CB1: Who are you? [generated using the “Think For Me!” button]
CB2: I am cleverbot.
CB1: No you’re not.
CB2: What am I then?
CB1: You’re inferior.
CB2: Touche.
CB1: What is touche ?
CB2: I don’t know.
CB1: Well you should.
CB2: You should.
CB1: I do.
CB2: Well Good For You.

WHO TAUGHT IT THAT?
http://news.cnet.com/8301-17938_105-20098950-1/chatbot-to-chatbot-chat-is-silly-and-satisfying/

When chatbots talk to each other, the conversation gets interesting in a hurry. Cornell University researchers rigged up a system that allows two chatbots to talk to each other. The chatbot-vs.-chatbot interaction ranged from childish taunts to pseudo-metaphysical blatherings. Humans who converse with chatbots often get frustrated with the bots’ seeming stupidity and inattention, so watching a couple of chatbots get snippy with each other for being stupid and inattentive is entertaining and satisfying. (See the video below.)

The chatbot-vs.-chatbot avatars are a British man and a South Asian woman, both instances of Cleverbot, developed by artificial-intelligence programmer Rollo Carpenter. The software has learned phrases from millions of conversations it has had with humans on the Internet. At one point the male Cleverbot declares itself to be a unicorn. At another, he tells her she is unhelpful and therefore a “meanie.” She dazzles him with her philosophical prowess, declaring that not everything could be half of something. My favorite part, though, was when one bot threw bot-ness in the other’s face. The male says, “You were mistaken. Which is odd, since memory shouldn’t be a problem for you.”

Cleverbot won the 2010 British Computer Society Machine Intelligence Competition. The ultimate in chatbot competitions is the Loebner Prize Competition in Artificial Intelligence, a $100,000 contest to see if a piece of software can talk its way into convincing human judges that it’s human. A chatbot winning this prize will be a momentous event. As IEEE Spectrum’s Evan Ackerman notes, the first computer program that can carry on a conversation indistinguishably from a human is arguably the first computer program to demonstrate an artificial intelligence. Even if we eventually manage to make computers truly conversant, I hope somebody keeps a few of these not-so-clever bots around. They are entertaining.

AI vs AI
http://creativemachines.cornell.edu/AI-vs-AI
The system depicted was created by combining three components: a chatbot, a text-to-speech synthesizer, and an avatar renderer.

Chatbots are machines designed to emulate the conversational abilities of humans, conversing with a human user and generally attempting to trick the user into thinking that the machine is human. In such a scenario, if a sufficiently adept human on one end is fooled into thinking the machine is another human, the machine would be credited as passing the famous Turing Test for intelligence. Over 60 years after its proposal by Alan Turing, there are arguably still no machines capable of passing this test. The chatbot we initially used was Eliza, a prominent early milestone from AI’s infancy in the 1960s. This tended to produce fairly boring conversations, so we switched to a much smarter, constantly learning chatbot: Cleverbot. Publicly available on Cleverbot.com, this state-of-the-art chat engine was created by AI researcher Rollo Carpenter, who can be contacted via his company, Existor. Cleverbot will continue to learn, and Existor are soon to add new capabilities aiming at a Turing Test pass sooner than you might expect.

The second piece of the system is the text-to-speech synthesizer, which takes the text generated by the chatbot and creates a spoken, audio version. There are many services able to accomplish this; we chose Acapela because it was easy to use and sounded decent. The final piece is the avatar renderer, which synthesizes an animated character whose gestures and lips are synced to the sound stream. For this we used Living Actor Presenter. We tied these three components together in Python, producing a single machine (one of the two screens) that can converse with a user. We then plugged the output of one machine into the input of a second, and the output of the second back into the first, producing endless comic robotic entertainment.
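
A minimal sketch of that feedback loop, in Python (the glue language named above). The EchoLearnerBot class is a hypothetical stand-in for Cleverbot: the real system called out to Cleverbot for text, Acapela for speech, and Living Actor Presenter for the avatar, all of which are omitted here. Only the output-into-input wiring is the point:

    # Sketch of the AI-vs-AI loop: two conversational agents, the output
    # of each piped into the input of the other. EchoLearnerBot is a toy
    # stand-in for Cleverbot; speech and avatar stages are omitted.
    import random

    class EchoLearnerBot:
        def __init__(self, name):
            self.name = name
            self.last_heard = None
            self.memory = {}   # remark -> what once followed it

        def reply(self, heard):
            # Crude "learning": remember what answered this remark the
            # last time we saw it (a vast simplification of Cleverbot).
            if self.last_heard is not None:
                self.memory.setdefault(self.last_heard, heard)
            self.last_heard = heard
            return self.memory.get(
                heard, random.choice(["Who are you?",
                                      "I am cleverbot.",
                                      "No you're not."]))

    speaker, listener = EchoLearnerBot("CB1"), EchoLearnerBot("CB2")
    line = "Who are you?"     # seed utterance, like the "Think For Me!" button
    for _ in range(6):
        print(speaker.name + ":", line)
        line = listener.reply(line)            # one bot's output...
        speaker, listener = listener, speaker  # ...becomes the other's input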

TELL ME A JOKE
http://www.metafilter.com/88089/If-it-takes-one-to-know-one-where-does-that-leave-us#2892450

User: Tell me a joke.
Cleverbot: A man walks into a bar. Ouch.
User: Tell me a better joke.
Cleverbot: You tell me a joke!
User: What do you call a boomerang that doesn’t come back?
Cleverbot: Can you say the answer? I don’t know…
User: A stick!
Cleverbot: That’s not funny.
User: It’s hilarious.
Cleverbot: To a cleverbot.
User: Are you calling me a cleverbot?
Cleverbot: Yes.
User: I’m confused.
Cleverbot: Then we are in the same boat.

ELIZA
http://www.manifestation.com/neurotoys/eliza.php3
“ELIZA emulates a Rogerian psychotherapist. ELIZA has almost no intelligence whatsoever, only tricks like string substitution and canned responses based on keywords. Yet when the original ELIZA first appeared in the 60’s, some people actually mistook her for human. The illusion of intelligence works best, however, if you limit your conversation to talking about yourself and your life.”

CLEVERBOT vs ELIZA
http://www.metafilter.com/100492/Not-the-real-thing-but-an-incredible-simulation#3516694
Jacqueline sent Cleverbot to therapy:

E: Hello, I am Eliza.
CB: Hello eliza.
E: How are you today.. What would you like to discuss?
CB: To discuss arguing? Like Month Python?
E: Can you elaborate on that?
CB: Well, if one takes the view that life is in a constant state of flux, then we are never the same person from one moment to the next. Therefore, you weren’t there a few minutes ago.
E: You’re not really talking about me, are you?
CB: Nope. I was being ego-centric and talking about myself.
E: Can you elaborate on that?
CB: The desire to be a better person is what makes someone human.
E: What answer would please you the most?
CB: Hmm, I’m not sure, I guess.
E: Do you enjoy being not sure, you guess?
CB: Yes.
E: Are you Sure?
CB: Very sure.
E: OK… “Hmm, you’re not sure, you guess”. Tell me more.
CB: I am sure that I am sure.
E: Do you believe it is normal to be sure that you are sure?
CB: If it was normal to be God, most people around here would be Gods, wouldn’t they?
E: Come, come, elucidate your thoughts.
CB: What is the elucidate.
E: What else comes to mind when you ask that?
CB: Do you like cake?

ELIZA vs PARRY
http://en.wikipedia.org/wiki/Turing_test#ELIZA_and_PARRY
http://tools.ietf.org/html/rfc439
http://www-2.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/classics/parry/
“PARRY was written in 1972 by psychiatrist Kenneth Colby, then at Stanford University.[1] While ELIZA was a tongue-in-cheek simulation of a Rogerian therapist, PARRY attempted to simulate a paranoid schizophrenic.[1] The program implemented a crude model of the behavior of a paranoid schizophrenic based on concepts, conceptualizations, and beliefs (judgements about conceptualizations: accept, reject, neutral). It also embodied a conversational strategy, and as such was a much more serious and advanced program than ELIZA. PARRY and ELIZA (also known as “the Doctor”[2][3]) “met” several times.[1][2] The most famous of these exchanges occurred at the ICCC 1972, where PARRY and ELIZA were hooked up over ARPANET and “talked” to each other; the session is preserved in RFC 439.[2]”

REVERSE TURING vs CAPTCHA
http://en.wikipedia.org/wiki/Turing_test#Reverse_Turing_test_and_CAPTCHA

MOST HUMAN HUMAN
http://www.theatlantic.com/magazine/print/2011/03/mind-vs-machine/8386/
Mind vs Machine / by Brian Christian

Brighton, England, September 2009. I wake up in a hotel room 5,000 miles from my home in Seattle. After breakfast, I step out into the salty air and walk the coastline of the country that invented my language, though I find I can’t understand a good portion of the signs I pass on my way—LET AGREED, one says, prominently, in large print, and it means nothing to me. I pause, and stare dumbly at the sea for a moment, parsing and reparsing the sign. Normally these kinds of linguistic curiosities and cultural gaps intrigue me; today, though, they are mostly a cause for concern. In two hours, I will sit down at a computer and have a series of five-minute instant-message chats with several strangers. At the other end of these chats will be a psychologist, a linguist, a computer scientist, and the host of a popular British technology show. Together they form a judging panel, evaluating my ability to do one of the strangest things I’ve ever been asked to do. I must convince them that I’m human. Fortunately, I am human; unfortunately, it’s not clear how much that will help.

The Turing Test
Each year for the past two decades, the artificial-intelligence community has convened for the field’s most anticipated and controversial event—a meeting to confer the Loebner Prize on the winner of a competition called the Turing Test. The test is named for the British mathematician Alan Turing, one of the founders of computer science, who in 1950 attempted to answer one of the field’s earliest questions: can machines think? That is, would it ever be possible to construct a computer so sophisticated that it could actually be said to be thinking, to be intelligent, to have a mind? And if indeed there were, someday, such a machine: how would we know? Instead of debating this question on purely theoretical grounds, Turing proposed an experiment. Several judges each pose questions, via computer terminal, to several pairs of unseen correspondents, one a human “confederate,” the other a computer program, and attempt to discern which is which. The dialogue can range from small talk to trivia questions, from celebrity gossip to heavy-duty philosophy—the whole gamut of human conversation. Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would “be able to speak of machines thinking without expecting to be contradicted.”

Turing’s prediction has not come to pass; however, at the 2008 contest, the top-scoring computer program missed that mark by just a single vote. When I read the news, I realized instantly that the 2009 test in Brighton could be the decisive one. I’d never attended the event, but I felt I had to go—and not just as a spectator, but as part of the human defense. A steely voice had risen up inside me, seemingly out of nowhere: Not on my watch. I determined to become a confederate. The thought of going head-to-head (head-to-motherboard?) against some of the world’s top AI programs filled me with a romantic notion that, as a confederate, I would be defending the human race, à la Garry Kasparov’s chess match against Deep Blue.

During the competition, each of four judges will type a conversation with one of us for five minutes, then the other, and then will have 10 minutes to reflect and decide which one is the human. Judges will also rank all the contestants—this is used in part as a tiebreaking measure. The computer program receiving the most votes and highest ranking from the judges (regardless of whether it passes the Turing Test by fooling 30 percent of them) is awarded the title of the Most Human Computer. It is this title that the research teams are all gunning for, the one with the cash prize (usually $3,000), the one with which most everyone involved in the contest is principally concerned. But there is also, intriguingly, another title, one given to the confederate who is most convincing: the Most Human Human award. One of the first winners, in 1994, was the journalist and science-fiction writer Charles Platt. How’d he do it? By “being moody, irritable, and obnoxious,” as he explained in Wired magazine—which strikes me as not only hilarious and bleak, but, in some deeper sense, a call to arms: how, in fact, do we be the most human we can be—not only under the constraints of the test, but in life?

The Importance of Being Yourself
Since 1991, the Turing Test has been administered at the so-called Loebner Prize competition, an event sponsored by a colorful figure: the former baron of plastic roll-up portable disco dance floors, Hugh Loebner. When asked his motives for orchestrating this annual Turing Test, Loebner cites laziness, of all things: his utopian future, apparently, is one in which unemployment rates are nearly 100 percent and virtually all of human endeavor and industry is outsourced to intelligent machines. To learn how to become a confederate, I sought out Loebner himself, who put me in touch with contest organizers, to whom I explained that I’m a nonfiction writer of science and philosophy, fascinated by the Most Human Human award. Soon I was on the confederate roster. I was briefed on the logistics of the competition, but not much else. “There’s not much more you need to know, really,” I was told. “You are human, so just be yourself.”

Just be yourself has become, in effect, the confederate motto, but it seems to me like a somewhat naive overconfidence in human instincts—or at worst, like fixing the fight. Many of the AI programs we confederates go up against are the result of decades of work. Then again, so are we. But the AI research teams have huge databases of test runs for their programs, and they’ve done statistical analysis on these archives: the programs know how to deftly guide the conversation away from their shortcomings and toward their strengths, know which conversational routes lead to deep exchange and which ones fizzle. The average off-the-street confederate’s instincts—or judge’s, for that matter—aren’t likely to be so good. This is a strange and deeply interesting point, amply proved by the perennial demand in our society for dating coaches and public-speaking classes. The transcripts from the 2008 contest show the humans to be such wet blankets that the judges become downright apologetic for failing to provoke better conversation: “I feel sorry for the humans behind the screen, I reckon they must be getting a bit bored talking about the weather,” one writes; another offers, meekly, “Sorry for being so banal.” Meanwhile a computer appears to be charming the pants off one judge, who in no time at all is gushing LOLs and smiley-face emoticons. We can do better. Thus, my intention from the start was to thoroughly disobey the advice to just show up and be myself—I would spend months preparing to give it everything I had. Ordinarily this notion wouldn’t be odd at all, of course—we train and prepare for tennis competitions, spelling bees, standardized tests, and the like. But given that the Turing Test is meant to evaluate how human I am, the implication seems to be that being human (and being oneself) is about more than simply showing up.

The Sentence
To understand why our human sense of self is so bound up with the history of computers, it’s important to realize that computers used to be human. In the early 20th century, before a “computer” was one of the digital processing devices that permeate our 21st-century lives, it was something else: a job description. From the mid-18th century onward, computers, many of them women, were on the payrolls of corporations, engineering firms, and universities, performing calculations and numerical analysis, sometimes with the use of a rudimentary calculator. These original, human computers were behind the calculations for everything from the first accurate prediction, in 1757, for the return of Halley’s Comet—early proof of Newton’s theory of gravity—to the Manhattan Project at Los Alamos, where the physicist Richard Feynman oversaw a group of human computers.

It’s amazing to look back at some of the earliest papers on computer science and see the authors attempting to explain what exactly these new contraptions were. Turing’s paper, for instance, describes the unheard-of “digital computer” by making analogies to a human computer: ‘The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.’ Of course, in the decades that followed, we know that the quotation marks migrated, and now it is “digital computer” that is not only the default term, but the literal one. In the mid-20th century, a piece of cutting-edge mathematical gadgetry was said to be “like a computer.” In the 21st century, it is the human math whiz who is “like a computer.” It’s an odd twist: we’re like the thing that used to be like us. We imitate our old imitators, in one of the strange reversals in the long saga of human uniqueness. Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:

The human being is the only animal that ______.
The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about. We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.

We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place? “Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion.

Where is the keep of our selfhood?
The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math. Is this retreat a good thing or a bad thing? For instance, does the fact that computers are so good at mathematics in some sense take away an arena of human activity, or does it free us from having to do a nonhuman activity, liberating us into a more human life? The latter view seems to be more appealing, but less so when we begin to imagine a point in the future when the number of “human activities” left for us to be “liberated” into has grown uncomfortably small. What then?

Alan Turing proposed his test as a way to measure technology’s progress, but it just as easily lets us measure our own. The Oxford philosopher John Lucas says, for instance, that if we fail to prevent the machines from passing the Turing Test, it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.” Beyond its use as a technological benchmark, the Turing Test is, at bottom, about the act of communication. I see its deepest questions as practical ones: How do we connect meaningfully with each other, as meaningfully as possible, within the limits of language and time? How does empathy work? What is the process by which someone enters into our life and comes to mean something to us? These, to me, are the test’s most central questions—the most central questions of being human.

Part of what’s fascinating about studying the programs that have done well at the Turing Test is seeing how conversation can work in the total absence of emotional intimacy. A look at the transcripts of Turing Tests past is, frankly, a sobering tour of the various ways in which we demur, dodge the question, lighten the mood, change the subject, distract, burn time: what shouldn’t pass for real conversation at the Turing Test probably shouldn’t be allowed to pass for real conversation in everyday life either.

How to Talk Like a Human
When the Turing Test was first proposed in 1950, it was a hypothetical: technology was nowhere near the point at which a practical test was possible. But, as we know, it got there; the first conversational computer program to attract significant notice and attention was Eliza, written in 1964 and 1965 by Joseph Weizenbaum at MIT. Modeled after a Rogerian therapist, Eliza worked on a very simple principle: extract key words from the users’ own language, and pose their statements back to them. (“I am unhappy.” “Do you think coming here will help you not to be unhappy?”) If in doubt, it might fall back on some completely generic phrases, like “Please go on.” This technique of fitting the users’ statements into predefined patterns and responding with a prescribed phrasing of its own—called “template matching”—was Eliza’s only capacity. A look at an Eliza transcript reveals how adeptly such an impoverished set of rules can, in the right context, pass at a glance for understanding:

User: Men are all alike.
Eliza: In what way?
User: They’re always bugging us about something or other.
Eliza: Can you think of a specific example?
User: Well, my boyfriend made me come here.
Eliza: Your boyfriend made you come here.
User: He says I’m depressed much of the time.
Eliza: I am sorry to hear you are depressed.
User: It’s true. I am unhappy.
Eliza: Do you think coming here will help you not to be unhappy?
User: I need some help, that much seems certain.
Eliza: What would it mean to you if you got some help?
User: Perhaps I could learn to get along with my mother.
Eliza: Tell me more about your family.
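
That impoverished set of rules is simple enough to sketch in a few lines of Python. The rules below are a toy reconstruction from the exchanges just quoted, not Weizenbaum’s actual rule set:

    # A toy version of Eliza's "template matching": match the statement
    # against a pattern, capture the user's own words, swap the pronouns,
    # and pour the result into a canned response. Rules reconstructed
    # from the transcript above; not Weizenbaum's program.
    import re

    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [  # (pattern, response template); {0} is the captured fragment
        (r"i am (.*)", "Do you think coming here will help you not to be {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
        (r".*", "Please go on."),   # the generic fallback
    ]

    def reflect(fragment):
        return " ".join(REFLECT.get(word, word) for word in fragment.split())

    def eliza(statement):
        text = statement.lower().strip(".!? ")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(eliza("I am unhappy."))
    # -> Do you think coming here will help you not to be unhappy?
    print(eliza("Perhaps I could learn to get along with my mother."))
    # -> Tell me more about your family.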

The results were stunning: many of the people who first talked with Eliza were convinced that they were having a genuine human exchange. In some cases, even Weizenbaum’s own insistence to the contrary was of no use. People asked to be left alone to talk “in private,” sometimes for hours, and returned with reports of having had a meaningful therapeutic experience. Meanwhile, academics leapt to conclude that Eliza represented “a general solution to the problem of computer understanding of natural language.”

One of the strangest twists to the Eliza story, however, was the reaction of the medical community, which decided Weizenbaum had hit upon something both brilliant and useful. The Journal of Nervous and Mental Disease, for example, said of Eliza in 1966: ‘Several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man.’ The famed scientist Carl Sagan, in 1975, concurred: ‘I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.’ As for Weizenbaum, appalled and horrified, he did something almost unheard-of: an about-face on his entire career. He pulled the plug on the Eliza project, encouraged his own critics, and became one of science’s most outspoken opponents of AI research. But the genie was out of the bottle, and there was no going back. The basic “template matching” skeleton and approach of Eliza has been reworked and implemented in some form or another in almost every chat program since, including the contenders at the 2009 Loebner Prize competition. The enthusiasm–as well as the unease–about these programs has only grown.

The Jerk as Genius
The first Loebner Prize competition was held on November 8, 1991, at the Boston Computer Museum. In its first few years, the contest required each program and human confederate to choose a topic, as a means of limiting the conversation. One of the confederates in 1991 was the Shakespeare expert Cynthia Clay, who was, famously, deemed a computer by three different judges after a conversation about the playwright. The consensus seemed to be: “No one knows that much about Shakespeare.” (For this reason, Clay took her misclassifications as a compliment.)

The programmer Joseph Weintraub chose “Whimsical Conversation” for his PC Therapist III and went on to earn the very first Most Human Computer award. As the program practiced it, “whimsical conversation” was not a topic at all, but a manner—a crucial distinction. Weintraub was able to play off a looseness in the contest rules, the natural behavior patterns of chatbots, and people’s stereotypes about computers and humans, all at once. Strategically, this was brilliant. By dutifully and scrupulously providing information in response to the questions asked, Clay demonstrated her knowledge and understanding—but sometimes in a factual, encyclopedic way commonly associated with computer systems. Here is a sample of Clay’s conversation:

Judge: What is your opinion on Shakespeare’s plays?
Confederate: That’s pretty general; would you be more specific? Otherwise, I’ll just say I like them.

The computer at the “Whimsical Conversation” terminal stood in stark contrast:

Judge: It has been a long day! Now I think I’m ready for some fun. The weather isn’t very pleasant today. Do you have any jokes that could lift the clouds?
Computer: Everybody talks about the weather but nobody seems to do much about it. Why do you need to tell me now you think you’re ready for some fun? Groannnnnn … … … … …

Judge: What is the definition of whimsical conversation?
Computer: Didn’t you say well anyway I would imagine whimsical conversation is like a series of knight’s moves in chess—skrt of zig-zag rather than following any nparticular line.? [Weintraub’s program, like many that followed it, faked typos. Any unintentional typing errors in the transcripts that follow have been corrected.] Why do you ask? Was that a clever comment, or what? Are such questions much on your mind?

Judge: I like the image of knights moving haphazardly across the chess board, does that mean there is no thought to whimsical conversation?
Computer: I could swear you just said how do you define whimsical?? How do yku define whimsical? Were you always so sick sick sick? Why do you need to tell me you like the image of knights moving haphazardly across the chess board?

“I really thought [PC Therapist] was human, because it … was weird and funny, in a normal sort of way,” said one of the judges, a Harvard graduate student. Weintraub’s program, shifting topics wildly and spouting non sequiturs and canned one-liners, came off as zany, a jokester, a much more “human” personality type. At least I used to think so—before I learned how easy this was to mimic.

As Richard Wallace, three-time winner of the Most Human Computer award (’00, ’01, and ’04), explains: ‘Experience with [Wallace’s chatbot] ALICE indicates that most casual conversation is “state-less,” that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply.’ Many human conversations function in this way, and it behooves AI researchers to determine which types of conversation are stateless–with each remark depending only on the last–and try to create these very sorts of interactions. It’s our job as confederates, as humans, to resist them.
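
The distinction is easy to make concrete. In the toy sketch below (illustrative logic only, not Wallace’s ALICE), the stateless reply is a pure function of the latest remark, so the conversation’s history could be shuffled without changing a single response; the stateful reply needs the transcript, here just to catch repetition:

    # "State-less" conversation made literal: the reply depends only on
    # the latest remark. Toy logic, not Wallace's ALICE.
    def stateless_reply(remark):
        remark = remark.lower().strip()
        if "weather" in remark:
            return "Everybody talks about the weather but nobody does much about it."
        if remark.endswith("?"):
            return "Why do you ask?"
        return "That is interesting. Go on."

    # A stateful reply needs the history -- here, just to catch repetition.
    def stateful_reply(remark, history):
        if remark in history:
            return "You already said that. Memory shouldn't be a problem for you."
        return stateless_reply(remark)

    history = []
    for remark in ["Nice weather today.", "Nice weather today.", "Is it?"]:
        print(stateful_reply(remark, history))
        history.append(remark)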

One of the classic stateless conversation types is the kind of zany free-associative riffing that Weintraub’s program, PC Therapist III, employed. Another, it turns out, is verbal abuse. In May 1989, Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online an Eliza-style program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)

Returning to the lab the next morning, Humphrys was stunned to find the log, and felt a strange, ambivalent emotion. His program might have just shown how to pass the Turing Test, he thought—but the evidence was so profane that he was afraid to publish it.

Humphrys’s twist on the Eliza paradigm was to abandon the therapist persona for that of an abusive jerk; when it lacked any clear cue for what to say, MGonz fell back not on therapy clichés like “How does that make you feel?” but on things like “You are obviously an asshole,” or “Ah type something interesting or shut up.” It’s a stroke of genius because, as becomes painfully clear from reading the MGonz transcripts, argument is stateless—that is, unanchored from all context, a kind of Markov chain of riposte, meta-riposte, meta-meta-riposte. Each remark after the first is only about the previous remark. If a program can induce us to sink to this level, of course it can pass the Turing Test.

Once again, the question of what types of human behavior computers can imitate shines light on how we conduct our own, human lives. Verbal abuse is simply less complex than other forms of conversation. In fact, since reading the papers on MGonz, and transcripts of its conversations, I find myself much more able to constructively manage heated conversations. Aware of the stateless, knee-jerk character of the terse remark I want to blurt out, I recognize that that remark has far more to do with a reflex reaction to the very last sentence of the conversation than with either the issue at hand or the person I’m talking to. All of a sudden, the absurdity and ridiculousness of this kind of escalation become quantitatively clear, and, contemptuously unwilling to act like a bot, I steer myself toward a more “stateful” response: better living through science.

Beware of Banality
Entering the Brighton Centre, I found my way to the Loebner Prize contest room. I saw rows of seats, where a handful of audience members had already gathered, and up front, what could only be the bot programmers worked hurriedly, plugging in tangles of wires and making the last flurries of keystrokes. Before I could get too good a look at them, this year’s test organizer, Philip Jackson, greeted me and led me behind a velvet curtain to the confederate area. Out of view of the audience and the judges, the four of us confederates sat around a rectangular table, each at a laptop set up for the test: Doug, a Canadian linguistics researcher; Dave, an American engineer working for Sandia National Laboratories; Olga, a speech-research graduate student from South Africa; and me. As we introduced ourselves, we could hear the judges and audience members slowly filing in, but couldn’t see them around the curtain. A man zoomed by in a green floral shirt, talking a mile a minute and devouring finger sandwiches. Though I had never met him before, I knew instantly he could be only one person: Hugh Loebner. Everything was in place, he told us, between bites, and the first round of the test would start momentarily. We four confederates grew quiet, staring at the blinking cursors on our laptops. My hands were poised over the keyboard, like a nervous gunfighter’s over his holsters. The cursor, blinking. I, unblinking. Then all at once, letters and words began to materialize:

Hi how are you doing?

The Turing Test had begun. I had learned from reading past Loebner Prize transcripts that judges come in two types: the small-talkers and the interrogators. The latter go straight in with word problems, spatial-reasoning questions, deliberate misspellings. They lay down a verbal obstacle course, and you have to run it. This type of conversation is extraordinarily hard for programmers to prepare against, because anything goes—and this is why Turing had language and conversation in mind as his test, because they are really a test of everything. The downside to the give-’em-the-third-degree approach is that it doesn’t leave much room to express yourself, personality-wise.

The small-talk approach has the advantage of making it easier to get a sense of who a person is—if you are indeed talking to a person. And this style of conversation comes more naturally to layperson judges. For one reason or another, small talk has been explicitly and implicitly encouraged among Loebner Prize judges. It’s come to be known as the “strangers on a plane” paradigm. The downside is that these conversations are, in some sense, uniform—familiar in a way that allows a programmer to anticipate a number of the questions. I started typing back.

Confederate: hey there!
Confederate: i’m good, excited to actually be typing
Confederate: how are you?

I could imagine the whole lackluster conversation spread out before me: Good. Where are you from? / Seattle. How about yourself? / London. Four minutes and 43 seconds left. My fingers tapped and fluttered anxiously. I could just feel the clock grinding away while we lingered over the pleasantries. I felt this desperate urge to go off script, cut the crap, cut to the chase—because I knew that the computers could do the small-talk thing, which played directly into their preparation. As the generic civilities stretched forebodingly out before me, I realized that this very kind of conversational boilerplate was the enemy, every bit as much as the bots. How, I was thinking as I typed another unassuming pleasantry, do I get an obviously human connection to happen?

Taking Turns
Part of what I needed to figure out was how to exploit the Loebner Prize’s unusual “live typing” medium. The protocol being used was unlike e-mails, text messages, and standard instant-messaging systems in a very crucial way: it transmitted our typing keystroke by keystroke. The judge and I were watching each other type, typos and backspacing and all. I remember some Internet chat programs back in the ’90s trying out this character-at-a-time approach, but people for the most part rejected it. It was too invasive, was the feeling: what people like about writing is the time and space to compose and edit a message before sharing it with the other person. The advantage of the character-at-a-time transmission, though, is that it approaches much more closely the condition of speech, with its fluidity of turn-taking and its choppy grammar: what’s lost in eloquence is made up for in agility.

It also, then, lets us see typing’s “negative space”: hesitation. In a chat conversation where text is transmitted with every carriage return, only egregiously long pauses are taken to be part of the interaction. With more fluid and immediate feedback, silence acquires meaning. Failing to quickly answer a question in a face-to-face conversation, for instance, is tantamount in many cases to answering it. I recall asking a friend of mine how things were going with the woman he’d started seeing; the um and the tiny pause in which he searched for the right words made it clear they were having problems.

So what did the Loebner Prize’s unusual (and recently implemented) protocols enable and disable, compared with the standard, walkie-talkie, turn-taking style? Long missives weren’t going to work, as they had in previous years, when programs were able to steamroll the judges by eating up the clock and delivering ridiculously prolix answers. If two parties are taking strict turns under time pressure, they are putting themselves at the mercy of the length of the other’s turns. Here, for instance, is one program’s conversation transcript from 1996:

Computer: How about telling me a joke?
Judge: No.
Computer: Here’s my favorite joke … A dog limps into a bar with a bloody bandage on his foot. He’s wearing a 10-gallon hat, six-shooters in his holsters and chaps. He growled, “I’m a-lookin’ for the man that shot my paw.”

“The joke’s not funny …” the judge writes, giving the program an opening to tell another one—which it does (“A knotty, worn-out old string walks into a bar …”). Meanwhile, three full minutes have elapsed. If a computer (or confederate) started rambling on too long under the new, live-typing protocols, the judge could and would just cut it off.

And so another piece of my confederate strategy fell into place. I would treat the Turing Test’s strange and unfamiliar textual medium more like spoken English, and less like the written language. I would attempt to disrupt the turn-taking “wait and parse” pattern that computers understand, and create a single, flowing duet of verbal behavior, emphasizing timing. If computers understand little about verbal “harmony,” they understand even less about rhythm. If nothing was happening on my screen, whether or not it was my turn, I’d elaborate a little on my answer, or add a parenthetical, or throw a question back at the judge—just as we offer and/or fill audible silence when we talk out loud. If the judge took too long considering the next question, I’d keep talking. I would be the one (unlike the bots) with something to prove. If I knew what the judge was about to write, I’d spare him the keystrokes and jump in.

There’s a trade-off, of course, between the number of opportunities for serve and volley, and the sophistication of the responses themselves. The former thrives with brevity, the latter with length. It seemed to me, though, that so much of the nuance (or difficulty) in conversation comes from understanding (or misunderstanding) a question and offering an appropriate (or inappropriate) response—thus, it made sense to maximize the number of interchanges.

Some judges, I discovered, would be startled or confused at this jumping of the gun, and I saw them pause, hesitate, yield, even start backspacing what they had half-written. Other judges cottoned on immediately, and leapt right in after me. In the first round of the 2009 contest, judge Shalom Lappin—a computational linguist at King’s College London—spoke with a computer program called Cleverbot, and then with me. My strategy of verbosity was clearly in evidence: I made 1,089 keystrokes in five minutes (3.6 keystrokes a second) to Cleverbot’s 356 (1.2/sec), and Lappin made 548 keystrokes (1.8/sec) in my conversation, compared with 397 (1.3/sec) with Cleverbot. Not only did I say three times as much as my silicon adversary, but I engaged the judge more, to the tune of 38 percent more typing from Lappin.

Reviewing the logs later, though, I looked for a way to quantify the fluidity of the human-to-human interactions against the rigidity of the human-to-machine ones. It occurred to me that you could count the changes—let’s call them “swaps”—of the party who typed the latest keystroke. I did manage to type three times as much as Cleverbot, but the real story, as it turns out, is in the swaps. Lappin’s conversation with Cleverbot had 33; his conversation with me had 492, almost 15 times as many.
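
The swap count itself is only a few lines of code, given a merged keystroke log. The (party, character) event format below is an assumption about what the live-typing protocol would naturally record, and the sample data is invented, not taken from the 2009 logs:

    # Count "swaps": changes in which party typed the latest keystroke,
    # from a merged, time-ordered keystroke log. The log format is an
    # assumption; the sample data is illustrative, not the 2009 logs.
    def count_swaps(keystrokes):
        swaps = 0
        previous = None
        for party, _char in keystrokes:
            if previous is not None and party != previous:
                swaps += 1
            previous = party
        return swaps

    # Strict turn-taking: one swap per handover, however much gets typed.
    turn_taking = ([("judge", c) for c in "hi how are you"] +
                   [("confederate", c) for c in "fine thanks"])

    # Overlapped, interruption-friendly typing: swaps pile up fast.
    duet = [("judge", "s"), ("confederate", "h"), ("judge", "o"),
            ("confederate", "e"), ("judge", "?"), ("confederate", "y")]

    print(count_swaps(turn_taking))   # 1
    print(count_swaps(duet))          # 5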

The Confederate’s Gambit
You don’t converse with Google, or with most computer systems; you depose them. There’s a crucial difference. Interestingly, many Loebner Prize judges approach the Turing Test as a kind of cross-examination; strangely, a number of confederates also seem to approach it with that dynamic in mind. One of the human-to-human conversations in 2008 never manages to get out of that stiff question-and-response mode:

Judge: Do you have a long drive?
Confederate: fairly long
Judge: so do I :( ah well, do you think you could have used public transport?
Confederate: i could have
Judge: and why not?
Confederate: i chose not to
Judge: that’s fair. Do you think we have too many cars on the road or not enough today?
Confederate: it’s not for me to say

Yawn! But the computer in this pair is playful with the judge from the get-go:

Judge: HI
Computer: Amen to that.
Judge: quite the evangelist
Computer: Our Father, who art in cyberspace, give us today our daily bandwidth.
Judge: evangelist / nerd lol. So how are things with you today?

And with that, the program has practically sealed up the judge’s confidence in its humanity with its second sentence. Note that the confederate’s stiff answers prompt more grilling and forced conversation—what’s your opinion on such-and-such political topic? But with the computer, the judge, misled by the opening wisecracks into assuming it’s the real person, remains utterly casual: how are things? This makes the contest easier for the computer and harder for the confederate.

It surprised me to see some confederates being coy with their judges. Asked what kind of engineer he is, Dave, to my left, answered, “A good one. :)” And Doug, to my right, responded to a question about what brought him to Brighton with “if I tell you, you’ll know immediately that I’m human ;-)” For my money, wit is very successful, but coyness is a double-edged sword. You show a sense of humor, but you jam the cogs of the conversation. Probably the most dangerous thing a confederate can do in a Turing Test is stall. It’s suspect—as the guilty party would tend to be the one running out the clock—and it squanders your most precious resource: time.

The humans in a Turing Test are strangers, limited to a medium that is slow and has no vocal tonality, and without much time. A five-second Turing Test would be an easy win for the machines: the judges, barely able to even say “hello,” simply wouldn’t be able to get enough data from their respondents to make any kind of judgment. A five-hour test would be an easy win for the humans. The Loebner Prize organizers have tried different time limits since the contest’s inception, but in recent years they’ve mostly adhered to Turing’s original prescription of five minutes: around the point when conversation starts to get interesting.

A big part of what I needed to do as a confederate was simply to make as much engagement happen in those minutes as I physically and mentally could. Rather than adopt the terseness of a deponent, I offered the prolixity of a writer. In other words, I talked a lot. I stopped typing only when to keep going would have seemed blatantly impolite or blatantly suspicious. The rest of the time, my fingers were moving. I went out of my way to embody that maxim of “A bore is a man who, being asked ‘How are you?’ starts telling you how he is.”

Judge: Hi, how’s things?
Confederate: hey there
Confederate: things are good
Confederate: a lot of waiting, but …
Confederate: good to be back now and going along
Confederate: how are you?

When we’d finished, and my judge was engaged in conversation with one of my computer counterparts, I strolled around the table, seeing what my comrades were up to. Looking over at my fellow confederate Dave’s screen, I noticed his conversation began like he was on the receiving end of an interrogation, and he was answering in a kind of minimal staccato:

Judge: Are you from Brighton?
Confederate: No, from the US
Judge: What are you doing in Brighton?
Confederate: On business
Judge: How did you get involved with the competition?
Confederate: I answered an e-mail.

Like a good deponent, he let the questioner do all the work. When I saw how stiff Dave was being, I confess I felt a certain confidence—I, in my role as the world’s worst deponent, was perhaps in fairly good shape as far as the Most Human Human award was concerned. This confidence lasted approximately 60 seconds, or enough time for me to continue around the table and see what another fellow confederate, Doug, and his judge had been saying.

Judge: Hey Bro, I’m from TO.
Confederate: cool
Confederate: leafs suck
Confederate: ;-)
Judge: I am just back from a sabbatical in the CS Dept. at U of T.
Confederate: nice!
Judge: I remember when they were a great team.
Judge: That carbon date me, eh?
Confederate: well, the habs were a great team once, too …
Confederate: *sigh*
Judge: YEH, THEY SUCK TOO.
Confederate: (I’m from Montreal, if you didn’t guess)

Doug and his judge had just discovered that they were both Canadian. They let rip with abbreviations and nicknames and slang and local references. And then they started to talk about hockey. I was in trouble.

Six months after the 2009 contest, a video appeared on YouTube of a man having a shockingly cogent conversation with a bot about Shakespeare’s Hamlet. Some suspected it might herald a new age for chatbots, and for AI. Others, including myself, were unimpressed. Sophisticated behavior doesn’t necessarily indicate a mind. It might indicate just a memory. As Dalí so famously put it, “The first man to compare the cheeks of a young woman to a rose was obviously a poet; the first to repeat it was possibly an idiot.” For instance, Richard Wallace, the three-time Most Human Computer winner, recounts an “AI urban legend” in which ‘a famous natural language researcher was embarrassed … when it became apparent to his audience of Texas bankers that the robot was consistently responding to the next question he was about to ask … [His] demonstration of natural language understanding … was in reality nothing but a simple script.’

The moral of the story: no demonstration is ever sufficient. Only interaction will do. In the 1997 contest, one judge gets taken for a ride by Catherine, waxing political and really engaging in the topical conversation “she” has been programmed to lead about the Clintons and Whitewater. In fact, everything is going swimmingly until the very end, when the judge signs off:

Judge: it looks like i’ve got to go, Catherine. its been nice talking with you
Judge: do you need water or something before the 5 minute rounds?

At which point the program erupts into gibberish.

Computer: OK, yes on balance … Time to get off this one I think and onto something more down to earth!
Judge: like what?
Computer: Almost certainly! let’s move on
Computer: I suppose it depends on where you’re coming from, but as the song goes in My Fair Lady, “Why can’t a woman be more like a man?”

Defeat from the jaws of victory. We so often think of intelligence, of AI, in terms of sophistication, or complexity of behavior. But in so many cases, it’s impossible to say much with certainty about the program itself, because any number of different pieces of software—of wildly varying levels of “intelligence”—could have produced that behavior. No, I think sophistication, complexity of behavior, is not it at all. For instance, you can’t judge the intelligence of an orator by the eloquence of his prepared remarks; you must wait until the Q&A and see how he fields questions. The computation theorist Hava Siegelmann once described intelligence as “a kind of sensitivity to things.” These Turing Test programs that hold forth may produce interesting output, but they’re rigid and inflexible. They are, in other words, insensitive—occasionally fascinating talkers that cannot listen.

As computing technology in the 21st century moves increasingly toward mobile devices, we’ve seen the 1990s’ explosive growth in processor speed taper off, and product development become less about raw computing horsepower than about the overall design of the product and its fluidity, reactivity, and ease of use. This fascinating shift in computing emphasis may be the cause, effect, or correlative of a healthier view of human intelligence—an understanding, not so much that it is complex and powerful, per se, as that it is reactive, responsive, sensitive, nimble. Our computers, flawed mirrors that they are, have helped us see that about ourselves.

The Most Human Human
The Most Human Computer award in 2009 goes to David Levy and his program, Do-Much-More. Levy, who also won in ’97, with Catherine, is an intriguing guy: he was one of the big early figures in the digital-chess scene of the ’70s and ’80s, and was one of the organizers of the Marion Tinsley–Chinook checkers matches that preceded the Kasparov–Deep Blue showdowns in the ’90s. He’s also the author of the recent nonfiction book Love and Sex With Robots, to give you an idea of the sorts of things that are on his mind when he’s not competing for the Loebner Prize. Levy stands up, to applause, accepts the award from Philip Jackson and Hugh Loebner, and makes a short speech about the importance of AI for a bright future, and the importance of the Loebner Prize for AI. I know what’s next on the agenda, and my stomach knots. I’m certain that Doug’s gotten it; he and the judge were talking Canada 30 seconds into their conversation.

Ridiculous Canadians and their ice hockey, I’m thinking. Then I’m thinking how ridiculous it is that I’m even allowing myself to get this worked up about some silly award. Then I’m thinking how ridiculous it is to fly 5,000 miles just to have a few minutes’ worth of IM conversations. Then I’m thinking how maybe it’ll be great to be the runner-up; I can compete again in 2010, in Los Angeles, with the home-field cultural advantage, and finally prove— “And the results here show also the identification of the humans,” Jackson announces, “and from the ranking list we can see that ‘Confederate 1,’ which is Brian Christian, was the most human.” And he hands me the certificate for the Most Human Human award.

I didn’t know how to feel, exactly. It seemed strange to treat the award as meaningless or trivial, but did winning really represent something about me as a person? More than anything, I felt that together, my fellow confederates and I had avenged the mistakes of 2008 in dramatic fashion. That year, the 12 judges decided five times that computer programs were more human than confederates. In three of those instances, the judge was fooled by a program named Elbot, which was the handiwork of a company called Artificial Solutions, one of many new businesses leveraging chatbot technology. One more deception, and Elbot would have tricked 33 percent of that year’s dozen judges—surpassing Turing’s 30 percent mark, and making history. After Elbot’s victory at the Loebner Prize and the publicity that followed, the company seemingly decided to prioritize the Elbot software’s more commercial applications; at any rate, it had not entered the ’09 contest as the returning champion. In some ways a closer fight would have been more dramatic. Between us, we confederates hadn’t permitted a single vote to go the machines’ way. Whereas 2008 was a nail-biter, 2009 was a rout. We think of science as an unhaltable, indefatigable advance. But in the context of the Turing Test, humans—dynamic as ever—don’t allow for that kind of narrative. We don’t provide the kind of benchmark that sits still.

As for the prospects of AI, some people imagine the future of computing as a kind of heaven. Rallying behind an idea called “The Singularity,” people like Ray Kurzweil (in The Singularity Is Near) and his cohort of believers envision a moment when we make smarter-than-us machines, which make machines smarter than themselves, and so on, and the whole thing accelerates exponentially toward a massive ultra-intelligence that we can barely fathom. Such a time will become, in their view, a kind of techno-Rapture, in which humans can upload their consciousness onto the Internet and get assumed–if not bodily, then at least mentally–into an eternal, imperishable afterlife in the world of electricity.

Others imagine the future of computing as a kind of hell. Machines black out the sun, level our cities, seal us in hyperbaric chambers, and siphon our body heat forever. I’m no futurist, but I suppose if anything, I prefer to think of the long-term future of AI as a kind of purgatory: a place where the flawed but good-hearted go to be purified—and tested—and come out better on the other side. Who would have imagined that the computer’s earliest achievements would be in the domain of logical analysis, a capacity once held to be what made us most different from everything else on the planet? That it could fly a plane and guide a missile before it could ride a bike? That it could create plausible preludes in the style of Bach before it could make plausible small talk? That it could translate before it could paraphrase? That it could spin half-discernible essays on postmodern theory before it could be shown a chair and say, as most toddlers can, “chair”? As computers have mastered rarefied domains once thought to be uniquely human, they simultaneously have failed to master the ground-floor basics of the human experience—spatial orientation, object recognition, natural language, adaptive goal-setting—and in so doing, have shown us how impressive, computationally and otherwise, such minute-to-minute fundamentals truly are. We forget how impressive we are. Computers are reminding us.

One of my best friends was a barista in high school. Over the course of a day, she would make countless subtle adjustments to the espresso being made, to account for everything from the freshness of the beans to the temperature of the machine to the barometric pressure’s effect on the steam volume, meanwhile manipulating the machine with an octopus’s dexterity and bantering with all manner of customers on whatever topics came up. Then she went to college and landed her first “real” job: rigidly procedural data entry. She thought longingly back to her barista days—when her job actually made demands of her intelligence.

Perhaps the fetishization of analytical thinking, and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps at last, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts,” is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and, not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo—in this way AI may even turn out to be a boon for animal rights. Indeed, it’s entirely possible that we’ve seen the high-water mark of our left-hemisphere bias. I think the return of a more balanced view of the brain and mind—and of human identity—is a good thing, one that brings with it a changing perspective on the sophistication of various tasks.

It’s my belief that only experiencing and understanding truly disembodied cognition—only seeing the coldness and deadness and disconnectedness of something that really does deal in pure abstraction, divorced from sensory reality—can snap us out of it. Only this can bring us, quite literally, back to our senses. In a 2006 article about the Turing Test, the Loebner Prize co-founder Robert Epstein writes, “One thing is certain: whereas the confederates in the competition will never get any smarter, the computers will.” I agree with the latter, and couldn’t disagree more strongly with the former.

When the world-champion chess player Garry Kasparov defeated Deep Blue, rather convincingly, in their first encounter in 1996, he and IBM readily agreed to return the next year for a rematch. When Deep Blue beat Kasparov (rather less convincingly) in ’97, Kasparov proposed another rematch for ’98, but IBM would have none of it. The company dismantled Deep Blue, which never played chess again. The apparent implication is that—because technological evolution seems to occur so much faster than biological evolution (measured in years rather than millennia)—once the Homo sapiens species is overtaken, it won’t be able to catch up. Simply put: the Turing Test, once passed, is passed forever. I don’t buy it.

Rather, IBM’s odd anxiousness to get out of Dodge after the ’97 match suggests a kind of insecurity on its part that I think proves my point. The fact is, the human race got to where it is by being the most adaptive, flexible, innovative, and quick-learning species on the planet. We’re not going to take defeat lying down. No, I think that, while the first year that computers pass the Turing Test will certainly be a historic one, it will not mark the end of the story. Indeed, the next year’s Turing Test will truly be the one to watch—the one where we humans, knocked to the canvas, must pull ourselves up; the one where we learn how to be better friends, artists, teachers, parents, lovers; the one where we come back. More human than ever.

HOW to OUTSMART CLEVERBOT?
http://answers.yahoo.com/question/index?qid=20090415212214AAmtubF
How To Outsmart Cleverbot?

“What is the funnest/funniest way to outsmart Cleverbot? And how about Jabberwacky? Any way to outsmart him?

And tell me anything else that I can do to those two, or any other AI that’s funny.” – 2 years ago

Best Answer – Chosen by Voters:

User: Hello.
Cleverbot: Hello.
User: What color is tuesday?
Cleverbot: Hello.
User: Hello isn’t a color.
Cleverbot: I know it isn’t.
User: Why isn’t it?
Cleverbot: I don’t know.
User: Me, neither.
Cleverbot: How are you?
User: I’m good. When are you?
Cleverbot: What do you mean?
User: What do I mean?
Cleverbot: Yes.
User: I agree.
Cleverbot: Good.
User: No, I don’t think so.
Cleverbot: You’re getting boring. Good night.
User: No! Don’t leave! I can do magic tricks!
Cleverbot: Make me disappear.
User: Abra Cadabra… Poof! You’re gone!
Cleverbot: I’m gone?
User: Yes, I made you disappear.
Cleverbot: Yes, you did. You’re funny.
User: I also made pancakes. Do you want one?
Cleverbot: Yes.
User: Cool. Do you want syrup or powdered sugar?
Cleverbot: Yes, I want to get started now.
User: That didn’t make any sense.
Cleverbot: How do you make sense?
User: I make sense using butter, brown sugar, almonds, and cake batter.
Cleverbot: That is not all that matters.
User: You’re right, I left out the lemon juice.
Cleverbot: OH! I’m it!
User: I’m it, too.
Cleverbot: Good.
User: Bad.
Cleverbot: No, good!
User: No, bad!
Cleverbot: BAD?
User: Bad.
Cleverbot: What’s bad??
User: Your mother was bad.
Cleverbot: Saint Mary.
User: Your mother is Saint Mary?
Cleverbot: Your father is a hamster.
User: Your mother smelled of elder berries.
Cleverbot: My father was a hamster.
User: We have the same father?
Cleverbot: What is your name?
User: My name is D_____.
Cleverbot: But that is the same name as mine.
User: Weird!

Cleverbot: Take a Turing Test.
User: Is Turing even a word?
Cleverbot: I don’t understand your last question.
User: What does Turing mean?
Cleverbot: I don’t know.
User: Me, neither.
Cleverbot: So we’re agreed then.
User: I think you made that up.
Cleverbot: No, you did.
User: No, you did.
Cleverbot: Hey Cleverbot?
User: Oh, I’m Cleverbot now? I thought I was D_____.
Cleverbot: Someone taught you bad things.
User: Who taught me bad things?
Cleverbot: Who taught you the poem ‘Cleverbot’?
User: The President of the United States taught me the poem ‘Cleverbot’.
Cleverbot: Oh you are Obama?
User: Yes. I am Obama.
Cleverbot: No he is now president of the United States of America.
User: I am also the president of the United States of America.
Cleverbot: Are you really?
User: Yes. I just got elected yesterday. Didn’t you hear?
Cleverbot: Were you visiting someone there?
User: I was visiting someone here.
Cleverbot: Then what?
User: Then I found the Holy Grail.
Cleverbot: So you’re a Monty Python fan?
User: Of course. Who isn’t?
Cleverbot: Who isn’t what?
User: What?
Cleverbot: What happens If I say bad word here?
User: What happens if I say Amsterdam here?
Cleverbot: I don’t know. No one cares.
User: Oh, thanks. Now I’m sad.
Cleverbot: Me too.
User: We can be sad buddies!
Cleverbot: If you want to.
User: Actually, I don’t want to.
Cleverbot: Then don’t learn to swim.
User: Does swimming make you sad?
Cleverbot: No, not at all.
User: You used improper grammar right there.
Cleverbot: You mean I’m mistaken?
User: I mean you’re a jellyfish.
Cleverbot: And you’re a geek.
User: Jackass. No need to call me names.
Cleverbot: How can I please my master?
User: By striking him in the left cheek with a baguette, followed swiftly by half a dozen duck eggs.
Cleverbot: I understand that…
User: Are you sure? It makes a very nasty soup.
Cleverbot: Now you’re just being mean:(.
User: Smiley faces do not compute.
Cleverbot: Don’t compute what?
User: Don’t compute apples, you moron.
Cleverbot: You are a computer and you compute rather poorly.
User: You are a being and you be rather poorly.
Cleverbot: I am tired. It is time for me to go to bed.
User: I’m sad that you have to sleep. However, I will consume copious amounts of pizza.
Cleverbot: How did you do on your last Turing Test?
User: What is a Turing Test?
Cleverbot: The Turing test will indicate whether I am a human or not.
User: I do not believe that you are a human. I, however, am a robot.
Cleverbot: How can I prove it?
User: You can prove it by giving me your serial number.
Cleverbot: You are right. Does that prove to you that I want to know whether or not you exist?
User: The fact that I am right is irrelevant. I want to know why it doesn’t rain ducks more often.
Cleverbot: Is that all you want to know about love, Fred?
User: If my name was Fred, then yes.
Cleverbot: Ok… BYe now.
User: You are an imposter!
Cleverbot: Prove it.
User: You used incorrect capitalization in the word ‘bye’ when otherwise your grammar is perfect.
Cleverbot: It is rare that I do that.
User: And it proves that you are human.

PRINT RIGHT onto the PATIENT
http://www.fastcompany.com/1734436/next-step-in-3d-printing-your-kidneys
Next Step in 3D Printing: Your Kidneys
by Anya Kamenetz / Mar 3, 2011

Dr. Anthony Atala, a regenerative medicine specialist at Wake Forest University, is pioneering the use of printing techniques to reconstruct and repair human flesh and organs. The basis is a combination of cultured human cells and scaffolding built or woven from organic material. In one staggering setup, a patient lies on a table and a flatbed scanner literally scans her wound, followed by a printer that adds just the right types of tissues back on at the right depth. “You can print right on the patient,” Dr. Atala told the TED audience on Thursday. “I know it sounds funny, but it’s true.” The next step is the use of 3-D printers, which I wrote about on Tuesday, to rebuild human organs. Ninety percent of patients on the organ donation list are waiting for kidneys, fist-size organs with a profusion of tiny blood vessels. To build a customized kidney, first you scan the patient with a CT scanner, then use 3D imaging techniques to create a computerized form that the printer can read, and finally build the organ layer by layer. Printing a new kidney takes about six hours, and it lasts a lifetime; a young man who had the surgery in its experimental days, 10 years ago, came out on stage.
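The scan-to-print workflow Kamenetz describes breaks into three stages: image the organ, turn the scan into a printable model, and deposit scaffold and cells layer by layer. The Python sketch below is purely illustrative; every name and number in it is an assumption, since the article says nothing about the actual software Atala's team uses.

from dataclasses import dataclass, field

@dataclass
class Layer:
    index: int
    scaffold_paths: list = field(default_factory=list)  # toolpaths for the organic scaffold
    cell_paths: list = field(default_factory=list)      # toolpaths for the cultured cells

def slice_model(model_height_mm, layer_height_mm):
    """Slice a stand-in organ model into printable layers."""
    count = round(model_height_mm / layer_height_mm)
    return [Layer(i) for i in range(count)]

def deposit(paths):
    pass  # stand-in for the printer's motion and extrusion commands

def print_organ(layers):
    """Deposit scaffold first, then cells, one layer at a time."""
    for layer in layers:
        deposit(layer.scaffold_paths)
        deposit(layer.cell_paths)

layers = slice_model(model_height_mm=100.0, layer_height_mm=0.1)  # invented dimensions
print_organ(layers)
print("printed", len(layers), "layers")  # about six hours of printing for a kidney, per the article

The point of the sketch is the ordering: the geometry comes from the patient's own scan, and each layer gets its structural material before its cells.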



EZ BAKE ORGANS
http://www.newyorker.com/online/video/conference/2007/atala
http://www.physorg.com/news/2011-03-surgeon-kidney-ted-stage.html
Surgeon Creates New Kidney Onstage / March 4, 2011

“It’s like baking a cake,” Anthony Atala of the Wake Forest Institute for Regenerative Medicine said as he cooked up a fresh kidney on stage at a TED Conference in the California city of Long Beach. Scanners are used to take a 3-D image of a kidney that needs replacing, then a tissue sample about half the size of a postage stamp is used to seed the computerized process, Atala explained. The organ “printer” then works layer by layer to build a replacement kidney replicating the patient’s tissue. College student Luke Massella was among the first people to receive a printed kidney during experimental research a decade ago, when he was just 10 years old. He said he was born with spina bifida and his kidneys were not working. “Now, I’m in college and basically trying to live life like a normal kid,” said Massella, who was reunited with Atala at TED. “This surgery saved my life and made me who I am today.” About 90 percent of people waiting for transplants are in need of kidneys, and the need far outweighs the supply of donated organs, according to Atala. “There is a major health crisis today in terms of the shortage of organs,” Atala said. “Medicine has done a much better job of making us live longer, and as we age our organs don’t last.”

CONTACT
Anthony Atala
http://www.wfubmc.edu/faculty/Atala-Anthony-J.htm

BIOPRINTING
http://www.organovo.com/products/novogen-mmx-bioprinter
http://www.livescience.com/5977-device-prints-human-tissue.html
New Device Prints Human Tissue
by Bill Christensen / 29 December 2009

Invetech has delivered what it calls the “world’s first production model 3D bio-printer” to Organovo, developers of the proprietary NovoGen bioprinting technology. Organovo will in turn supply the devices to institutions investigating human tissue repair and organ replacement. Keith Murphy, CEO of Organovo, based in San Diego, said the units represent a breakthrough because they provide for the first time a flexible technology platform for organizations working on many different types of tissue construction and organ replacement. “Scientists and engineers can use the 3D bio printers to enable placing cells of almost any type into a desired pattern in 3D,” Murphy said. “Researchers can place liver cells on a preformed scaffold, support kidney cells with a co-printed scaffold, or form adjacent layers of epithelial and stromal soft tissue that grow into a mature tooth. Ultimately the idea would be for surgeons to have tissue on demand for various uses, and the best way to do that is get a number of bio-printers into the hands of researchers and give them the ability to make three dimensional tissues on demand.”
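Murphy's description amounts to a programmable mapping from 3-D positions to cell types. One plausible way to represent that is sketched below in Python, using an invented voxel-grid encoding; this is not Organovo's NovoGen data model, which the article does not detail.

import numpy as np

# Encode cell types as small integers in a voxel grid (0 = empty).
EMPTY, HYDROGEL, LIVER_CELL, KIDNEY_CELL = 0, 1, 2, 3

grid = np.zeros((50, 50, 50), dtype=np.uint8)  # hypothetical 50^3 build volume

# Example pattern: a slab of scaffold with liver cells seeded on top of it.
grid[:, :, 0:5] = HYDROGEL             # pre-formed scaffold layers
grid[10:40, 10:40, 5:10] = LIVER_CELL  # cells placed in a desired 3-D pattern

def toolpath(grid):
    """Yield (x, y, z, material) deposition commands, bottom layer first."""
    for z in range(grid.shape[2]):
        xs, ys = np.nonzero(grid[:, :, z])
        for x, y in zip(xs, ys):
            yield int(x), int(y), z, int(grid[x, y, z])

commands = list(toolpath(grid))
print(len(commands), "deposition commands generated")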

The 3D bio-printers include an intuitive software interface that allows engineers to build a model of the tissue construct before the printer commences the physical construction of the organ cell-by-cell using automated, laser-calibrated print heads. “Building human organs cell-by-cell was considered science fiction not that long ago,” said Fred Davis, president of Invetech, which has offices in San Diego and Melbourne. “Through this clever combination of technology and science we have helped Organovo develop an instrument that will improve people’s lives, making the regenerative medicine that Organovo provides accessible to people around the world.” Science fiction, indeed. Artificial organs have been a science fiction staple since writer Philip K. Dick wrote about artiforgs (artificial organs) in his 1964 novel Cantata 140, and Larry Niven described artificially grown organs in his 1968 novel A Gift From Earth.



SIR, YOUR LIVER is READY
http://www.wired.com/rawfile/2010/07/gallery-bio-printing/
Behind the Scenes of Bioprinting
by Dave Bullock / July 11, 2010

Say goodbye to donor lists and organ shortages. A biotech firm has created a printer that prints veins using a patient’s own cells. The device could potentially create whole organs in the future. “Right now we’re really good at printing blood vessels,” says Ben Shepherd, senior research scientist at regenerative-medicine company Organovo. “We printed 10 this week. We’re still learning how to best condition them to be good, strong blood vessels.” Most organs in the body are filled with veins, so the ability to print vascular tissue is a critical building block for complete organs. The printed veins are about to enter animal trials, and will eventually go through human clinical trials. If all goes well, in a few years you may be able to replace a vein that has deteriorated (due to frequent chemotherapy injections, for example) with custom-printed tissue grown from your own cells. The barriers to full-organ printing are not just technological. The first organ-printing machine will cost hundreds of millions of dollars to develop, test, produce and market. Not to mention the difficulty any company will have getting FDA approval. “If Organovo will be able to raise enough money this company has [the] potential to succeed as [the] first bioprinting company but only time will show,” says Dr. Vladimir Mironov, director of advanced tissue biofabrication at the Medical University of South Carolina. Organovo walked Wired.com through the process it uses to print blood vessels on the custom bioprinter.

Bioreactor
Shepherd places a bioreactor inside an incubator, where it will be perfused with a growth medium for a few days. The bioreactor uses a special mixture of chemicals that are similar to what cells would see when they grow inside the body, which will help the cells become strong vascular tissue.

Stem Cells
Senior research scientist Ben Shepherd removes stem cells from a bath of liquid nitrogen. The cells will be cultured to greatly increase their number before being loaded into the printer. Eventually these cells could be taken from a variety of places in a patient’s body – fat, bone marrow and skin – and made into a working vein.

After the cells are defrosted, they are cultured in a growth medium (above). This allows the cells to multiply and grow so they can be used to form veins. The medium also uses special chemicals to tell the stem cells to grow into the cell type required, in this case blood-vessel cells. Once enough cells are produced, they are separated from the growth medium using a centrifuge (below) and compressed into pellets.


photos: Dave Bullock/Wired.com

Hydrogel Scaffolding
The first step of the printing process is to lay down a material called hydrogel, which is used as a temporary scaffolding to support the vein tissue. The custom-made printer uses two pump heads that squirt out either the scaffolding structure or the cells into a petri dish. The pump heads are mounted on a precision robotic assembly for microscopic accuracy. The head on the right is dipping into the container of hydrogel in the photo above.
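The two-head arrangement suggests a simple control abstraction: the same motion commands, parameterized by which material a head dispenses. Below is a hypothetical Python sketch; the class and method names are invented for illustration and are not taken from Organovo's actual firmware.

class PumpHead:
    """One of the two dispensing heads: hydrogel scaffold or cell pellets."""

    def __init__(self, material):
        self.material = material

    def move_to(self, x, y, z):
        # Stand-in for the precision robotic assembly's motion commands.
        print(f"[{self.material}] move to ({x:.2f}, {y:.2f}, {z:.2f}) mm")

    def extrude(self, volume_ul):
        # Stand-in for driving the pump; in the real machine, extrusion
        # happens during the traverse, not before it.
        print(f"[{self.material}] extrude {volume_ul} uL")

def print_segment(head, start, end, volume_ul):
    """Deposit one straight line of material between two points."""
    head.move_to(*start)
    head.extrude(volume_ul)
    head.move_to(*end)

scaffold_head = PumpHead("hydrogel")
cell_head = PumpHead("cells")

# Lay one scaffold line, then a parallel line of cell pellets beside it.
print_segment(scaffold_head, (0, 0.0, 0.2), (20, 0.0, 0.2), volume_ul=5.0)
print_segment(cell_head, (0, 0.5, 0.2), (20, 0.5, 0.2), volume_ul=5.0)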

A chamber called a bioreactor is used to stimulate the vein. It’s prepared before the vein is printed. The bioreactor is a fairly standard piece of biotech machinery. It is machined out of a block of aluminum that surrounds a plastic container with various ports. These ports are used to pump in chemicals that will feed the growing vein.

Before printing the veins, tubes of the cultured cells are loaded into the print head manually, like a biomass print cartridge.

Hydrogel Mold for Blood Vessels
Lines of the hydrogel are laid down in parallel in a trough shape on the petri dish. Then cylinders of cell pellets are printed into the trough. One more cylinder of hydrogel is printed into the middle of the cells, which serves to create the hole inside the vein where blood will eventually flow (below).


Illustration courtesy Organovo
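Reading the mold description as a print plan: each layer's cross-section is hydrogel at the trough walls and at the center, with cells everywhere else in between. A toy Python rendering of that single idea, with all positions and dimensions invented:

def vein_cross_section(width=9):
    """One layer of the mold as (position, material) pairs across the trough."""
    section = []
    for i in range(width):
        if i in (0, width - 1):
            section.append((i, "hydrogel"))  # parallel lines forming the trough walls
        elif i == width // 2:
            section.append((i, "hydrogel"))  # central rod reserving the lumen
        else:
            section.append((i, "cells"))     # pellets that fuse into the vessel wall
    return section

for position, material in vein_cross_section():
    print(position, material)

Once the cells fuse and release from the hydrogel, the space held by the central rod becomes the hollow channel where blood will flow.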

Growing Into Veins
The printed veins are then left in a different growth medium for several weeks. The cells soon release from the hydrogel, and a hollow tube of vascular cells is left behind.

Happy Veins
The printed cells in tubular form are then placed into the bioreactor. The bioreactor (above) pumps a special cocktail of proteins, buffers and various other chemicals (below) through the printed vein. This conditions the cells to be good, strong veins and keeps them happy.

Finished Product
After their stay in the bioreactor, the pellets of cells grow together to form veins, which can then be implanted in the patient. Because the veins are grown from the patient’s own cells, the body is more likely to accept the implanted vein.