Robojelly diagram
A computer-aided model of Robojelly shows the vehicle’s two bell-like structures.

Undersea Vehicle Powered by Hydrogen and Oxygen

Researchers at The University of Texas at Dallas and Virginia Tech have created an undersea vehicle inspired by the common jellyfish that runs on renewable energy and could be used in ocean rescue and surveillance missions. In a study published this week in Smart Materials and Structures, scientists created a robotic jellyfish, dubbed Robojelly, that feeds off hydrogen and oxygen gases found in water. “We’ve created an underwater robot that doesn’t need batteries or electricity,” said Dr. Yonas Tadesse, assistant professor of mechanical engineering at UT Dallas and lead author of the study. “The only waste released as it travels is more water.”

Engineers and scientists have increasingly turned to nature for inspiration when creating new technologies. The simple yet powerful movement of the moon jellyfish made it an appealing animal to simulate. The Robojelly consists of two bell-like structures made of silicone that fold like an umbrella. Connecting the umbrella are muscles that contract to move it.

In this study, researchers upgraded the original, battery-powered Robojelly to be self-powered. They did that through a combination of high-tech materials, including artificial muscles that contract when heated. These muscles are made of a nickel-titanium alloy wrapped in carbon nanotubes, coated with platinum and housed in a pipe. As the mixture of hydrogen and oxygen encounters the platinum, heat and water vapor are created. That heat causes a contraction that moves the muscles of the device, pumping out the water and starting the cycle again.

“It could stay underwater and refuel itself while it is performing surveillance,” Tadesse said. In addition to military surveillance, he said, the device could be used to detect pollutants in water. Tadesse said the next step would be refining the device’s legs to move independently, allowing the Robojelly to travel in more than one direction.
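The actuation loop described above (fuel pulse, catalytic heating, contraction once the alloy passes its transition temperature, passive cooling) can be sketched as a toy thermal model. Every constant below is a hypothetical assumption chosen to make the cycle visible, not a value from the study:

```python
# Toy sketch of the actuation cycle: a hydrogen/oxygen fuel pulse heats the
# platinum-coated muscle; above an assumed transition temperature the NiTi
# alloy contracts; as it cools in the surrounding water, the bell relaxes
# and the cycle repeats. All constants are hypothetical.

AMBIENT_C = 20.0       # surrounding water temperature, deg C (assumed)
TRANSITION_C = 70.0    # assumed NiTi contraction threshold, deg C
HEAT_PER_PULSE = 60.0  # temperature jump per fuel pulse, deg C (assumed)
COOLING_RATE = 0.2     # fraction of excess heat lost per time step (assumed)

def simulate(pulses, steps_between=10):
    """Return (temperature, contracted) samples over several fuel pulses."""
    temp = AMBIENT_C
    history = []
    for _ in range(pulses):
        temp += HEAT_PER_PULSE           # catalytic reaction heats the muscle
        for _ in range(steps_between):   # muscle cools back toward ambient
            history.append((round(temp, 1), temp >= TRANSITION_C))
            temp -= COOLING_RATE * (temp - AMBIENT_C)
    return history

samples = simulate(pulses=3)
strokes = sum(1 for _, contracted in samples if contracted)
print(f"{strokes} of {len(samples)} samples above the contraction threshold")
```

With these made-up numbers, each pulse briefly pushes the muscle above the threshold before cooling pulls it back down, which is the pumping rhythm the article describes.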

Dr. Ray Baughman, the Robert A. Welch Distinguished Chair in Chemistry and director of the Alan G. MacDiarmid NanoTech Institute at UT Dallas, was an author of the study. The research was a collaboration between researchers at the University of Texas at Dallas and Virginia Polytechnic Institute and State University, Virginia Tech, including Dr. Shashank Priya, the study’s senior author. The study was funded by the Office of Naval Research.

The Robojelly, shown here out of water, has an outer structure made out of silicone.

When the Earth is uninhabited, this robotic jellyfish will still be roaming the seas
by Esther Inglis-Arkell  /  March 21, 2012

Virginia Tech and the University of Texas at Dallas have claimed their place as the leading purveyors of robot-based nautical doom with Robojelly, a robot that simulates the look and movement of a cnidarian. Anyone who has seen jellies knows that they move with a repetitive contraction of their bells, or their transparent outer shells. This movement requires two motions: a contraction and a snap back to the original position. For this carbon-nanotube jellyfish, the engineers used a commercially available, shape-memory titanium-and-nickel alloy to mimic the snap back. The contraction was harder to engineer. The Robojelly needed muscles, so the researchers covered the shape-memory sheets with platinum-coated carbon nanotubes. When hydrogen and oxygen gases in the water make contact with the platinum, which is in the form of black powder, they create a reaction that gives off heat. This causes the nickel-titanium alloy to contract. And since hydrogen and oxygen are in seawater, these jellies could, with some future tinkering, roam the oceans indefinitely.

The deformation of the bell, powered by this reaction, was found to be a modest 13.5%. An electro-robojelly can manage 29% and a biological one can get an impressive 42%, but neither of the latter can power themselves until judgment day.

Yonas Tadesse
email : yonas.tadesse [at] utdallas [dot] edu


“Artificial muscles powered by a renewable energy source are desired for joint articulation in bio-inspired autonomous systems. In this study, a robotic underwater vehicle, inspired by jellyfish, was designed to be actuated by a chemical fuel source. The fuel-powered muscles presented in this work comprise nano-platinum catalyst-coated multi-wall carbon nanotube (MWCNT) sheets, wrapped on the surface of nickel–titanium (NiTi) shape memory alloy (SMA). As a mixture of oxygen and hydrogen gases makes contact with the platinum, the resulting exothermic reaction activates the nickel–titanium (NiTi)-based SMA. The MWCNT sheets serve as a support for the platinum particles and enhance the heat transfer due to the high thermal conductivity between the composite and the SMA. A hydrogen and oxygen fuel source could potentially provide higher power density than electrical sources. Several vehicle designs were considered and a peripheral SMA configuration under the robotic bell was chosen as the best arrangement. Constitutive equations combined with thermodynamic modeling were developed to understand the influence of system parameters that affect the overall actuation behavior of the fuel-powered SMA. The model is based on the changes in entropy of the hydrogen and oxygen fuel on the composite actuator within a channel. The specific heat capacity is the dominant factor controlling the width of the strain for various pulse widths of fuel delivery. Both theoretical and experimental strains for different diameter (100 and 150 µm) SMA/MWCNT/Pt fuel-powered muscles with dead weight attached at the end exhibited the highest magnitude under 450 ms of fuel delivery within 1.6 mm diameter conduit size. Fuel-powered bell deformation of 13.5% was found to be comparable to that of electrically powered (29%) and natural jellyfish (42%).”


Japanese researcher draws inspiration from slime mold cognition
by Christopher Mims  / 03/09/2012

A new blob-like robot described in the journal Advanced Robotics uses springs, feet, “protoplasm” and a distributed nervous system to move in a manner inspired by the slime mold Physarum polycephalum. Watch it ooze across a flat surface, The Blob style:

Skip to 1:00 if you just want to be creeped out by its life-like quivering. (And if anyone can explain why, aside from wanting to kill its creepiness, the researcher stabs it with a pen-knife at 1:40, let me know in the comments.) Researcher Takuya Umedachi of Hiroshima University has been perfecting his blob-bot for years, starting with early prototypes that used springs but lacked an air-filled bladder.

This model didn’t work nearly as well, demonstrating, I guess, the need for a fluid- or air-filled sack when you’re going to project your soft-bodied self in a new direction. (Hydraulic pressure is, after all, how our tongues work.) Umedachi modeled his latest version on the “true” slime mold, which has been shown to achieve a “human-like” decision-making capacity through properties emerging from the interactions of its individual parts. Slime molds appear to have general computational abilities, and you’ve probably heard that they can solve mazes. Here’s what they look like in the wild.

Yellow slime mold (detail) by frankenstoen

Yellow slime mold by frankenstoen

Soft-bodied robots can do things their rigid, insectoid brethren can’t, like worm their way into tight spots and bounce back in the face of physical insult. Umedachi’s goal isn’t simply to create a new kind of locomotion, however. He’s exploring the way in which robots that lack a centralized command center — i.e. a brain — can accomplish things anyway. Slime molds are a perfect model for this sort of thing, because they don’t even have the primitive neural nets that characterize the coordinated swimming and feeding actions in jellyfish.

From the abstract:

A fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism is realized by exploiting the global physical interaction between the body parts stemming from the fluid circuit. The experimental results show that this robot exhibits adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on the design scheme for autonomous decentralized control systems.

Simulations indicate that the robot should be highly adaptable to deformation — i.e., squeezing through tight spaces.
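The control scheme in the abstract can be illustrated with a toy model: phase oscillators that never talk to each other directly, each sensing only a shared global quantity standing in for the fluid circuit. This is a generic mean-field phase-coupling sketch of the idea, not the paper's actual equations:

```python
import math

# Toy illustration of decentralized control: eight phase oscillators, one per
# body segment, with no central controller and no direct links between them.
# Each oscillator senses only a shared global signal (standing in for the
# robot's fluid circuit) and adjusts its own phase accordingly.

N = 8        # number of oscillators
OMEGA = 1.0  # common natural frequency, rad per time unit
K = 1.0      # strength of the feedback through the shared medium
DT = 0.05    # integration time step

def step(phases):
    # The shared "pressure" is summarized by the mean phase vector; each
    # oscillator is nudged toward it using only this global quantity.
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return [p + DT * (OMEGA + K * (im * math.cos(p) - re * math.sin(p)))
            for p in phases]

def synchrony(phases):
    # Order parameter: 0 = phases spread out, 1 = fully synchronized.
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

phases = [0.5, 1.7, 3.0, 4.9, 0.1, 2.3, 5.5, 3.9]  # arbitrary starting phases
for _ in range(2000):
    phases = step(phases)
print(f"order parameter after 2000 steps: {synchrony(phases):.3f}")
```

Run it and the oscillators fall into a common rhythm with no hierarchy anywhere, which is the flavor of "adaptive locomotion without relying on any hierarchical structure" the abstract claims.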

For a full account of the ways that Umedachi plans to reproduce the world’s most primitive form of cognition in robots, here’s a 2011 talk on the subject by the professor himself.


Japanese researchers have created a hand-held gun that can jam the words of speakers who are more than 30 meters (100ft) away. The gun has two purposes, according to the researchers: At its most basic, this gun could be used in libraries and other quiet spaces to stop people from speaking — but its second application is a lot more chilling.

The researchers were looking for a way to stop “louder, stronger” voices from saying more than their fair share in conversation. The paper reads: “We have to establish and obey rules for proper turn-taking when speaking. However, some people tend to lengthen their turns or deliberately interrupt other people when it is their turn in order to establish their presence rather than achieve more fruitful discussions. Furthermore, some people tend to jeer at speakers to invalidate their speech.” In other words, this speech-jamming gun was built to enforce “proper” conversations.

The gun works by listening in with a directional microphone, and then, after a short delay of around 0.2 seconds, playing it back with a directional speaker. This triggers an effect that psychologists call Delayed Auditory Feedback (DAF), which has long been known to interrupt your speech (you might’ve experienced the same effect if you’ve ever heard your own voice echoing through Skype or another voice comms program). According to the researchers, DAF doesn’t cause physical discomfort, but the fact that you’re unable to talk is obviously quite stressful.
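At its core, the delay mechanism is just a ring buffer: a sample captured now is played back a fixed number of samples later. A minimal sketch (the sample rate and framing are my assumptions; a real jammer would wrap this around directional audio I/O):

```python
from collections import deque

# Minimal sketch of the delayed-auditory-feedback core: a ring buffer that
# returns each incoming sample a fixed time later. Sample rate is assumed.

SAMPLE_RATE = 16_000                              # samples/sec (assumed)
DELAY_SECONDS = 0.2                               # the ~200 ms DAF delay
DELAY_SAMPLES = int(SAMPLE_RATE * DELAY_SECONDS)  # 3200 samples of lag

class DelayedFeedback:
    """Feed microphone samples in; get the same samples back 200 ms later."""

    def __init__(self, delay_samples=DELAY_SAMPLES):
        # Pre-fill with silence so output lags input by exactly the delay.
        self.buffer = deque([0.0] * delay_samples, maxlen=delay_samples)

    def process(self, sample):
        delayed = self.buffer[0]    # the sample captured one delay ago
        self.buffer.append(sample)  # maxlen makes the deque drop the oldest
        return delayed

# With a tiny delay of 3 samples, input reappears 3 samples later:
daf = DelayedFeedback(delay_samples=3)
print([daf.process(s) for s in [10, 20, 30, 40, 50]])  # [0.0, 0.0, 0.0, 10, 20]
```

The directional microphone and speaker do the aiming; the buffer does the jamming.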

Suffice it to say, if you’re a firm believer in free speech, you should now be experiencing a deafening cacophony of alarm bells. Let me illustrate a few examples of how this speech-jamming gun could be used. At a political rally, an audience member could completely prevent Santorum, Romney, Paul, or Obama from speaking. On the flip side, a totalitarian state could point the speech jammers at the audience to shut them up. Likewise, when a celebrity or public figure appears on a live TV show, his contract could read “the audience must be silenced with speech jammers.”

Then there’s Harrison Bergeron, one of my favorite short stories by Kurt Vonnegut. In the story’s dystopian universe, everyone wears “handicaps” to ensure perfect social equality. Strong people must lug around heavy weights, beautiful people must wear masks, and intelligent people must wear headphones that play a huge blast of sound every few seconds, interrupting their thoughts. The more intelligent the wearer, the more frequent the blasts.

Back here in our universe, it’s not hard to imagine a future where we are outfitted with a variety of implanted electronics or full-blown bionic organs. Just last week we wrote about Google’s upcoming augmented-reality glasses, which will obviously have built-in earbuds. Late last year we covered bionic eyes that can communicate directly with the brain, and bionic ears and noses can’t be far off.

In short, imagine if a runaway mega-corporation or government gains control of these earbuds. Not only could the intelligence-destroying blasts from Harrison Bergeron come to pass, but with Delayed Auditory Feedback it would be possible to render the entire population mute. Well, actually, that’s a lie: Apparently DAF doesn’t work with utterances like “ahhh!” or “boooo!” or other non-wordy constructs. So, basically, we’d all be reduced to communicating with grunts and gestures.

How to Build a Speech-Jamming Gun
Japanese researchers build a gun capable of stopping speakers in mid-sentence / 03/01/2012

The drone of speakers who won’t stop is an inevitable experience at conferences, meetings, cinemas, and public libraries. Today, Kazutaka Kurihara at the National Institute of Advanced Industrial Science and Technology in Tsukuba and Koji Tsukada at Ochanomizu University, both in Japan, present a radical solution: a speech-jamming device that forces recalcitrant speakers into submission.

The idea is simple. Psychologists have known for some years that it is almost impossible to speak when your words are replayed to you with a delay of a fraction of a second. Kurihara and Tsukada have simply built a handheld device consisting of a microphone and a speaker that does just that: it records a person’s voice and replays it to them with a delay of about 0.2 seconds. The microphone and speaker are directional so the device can be aimed at a speaker from a distance, like a gun.

In tests, Kurihara and Tsukada say their speech-jamming gun works well: “The system can disturb remote people’s speech without any physical discomfort.” Their tests also identify some curious phenomena. They say the gun is more effective when the delay varies in time, and more effective against speech that involves reading aloud than against spontaneous monologue.

Kurihara and Tsukada make no claims about the commercial potential of their device but list various applications. They say it could be used to maintain silence in public libraries and to “facilitate discussion” in group meetings. “We have to establish and obey rules for proper turn-taking when speaking,” they say. That has important implications. “There are still many cases in which the negative aspects of speech become a barrier to the peaceful resolution of conflicts,” they point out.

Kazutaka Kurihara
email : k-kurihara [ at ]

Koji Tsukada
email : tsuka [at] mobiquitous [dot] com

SpeechJammer: A System Utilizing Artificial Speech Disturbance with Delayed Auditory Feedback
by Kazutaka Kurihara and Koji Tsukada / 28 Feb 2012

“In this paper we report on a system, “SpeechJammer”, which can be used to disturb people’s speech. In general, human speech is jammed by giving back to the speakers their own utterances at a delay of a few hundred milliseconds. This effect can disturb people without any physical discomfort, and disappears immediately by stop speaking. Furthermore, this effect does not involve anyone but the speaker. We utilize this phenomenon and implemented two prototype versions by combining a direction-sensitive microphone and a direction-sensitive speaker, enabling the speech of a specific person to be disturbed. We discuss practical application scenarios of the system, such as facilitating and controlling discussions. Finally, we argue what system parameters should be examined in detail in future formal studies based on the lessons learned from our preliminary study.”


Two Japanese researchers recently introduced a prototype for a device they call a SpeechJammer that can literally “jam” someone’s voice — effectively stopping them from talking. Now they’ve released a video of the device in action. “We have to establish and obey rules for proper turn-taking,” write Kazutaka Kurihara and Koji Tsukada in their article on the SpeechJammer (PDF). “However, some people tend to lengthen their turns or deliberately disrupt other people when it is their turn … rather than achieve more fruitful discussions.”

The researchers released the video after their paper went viral Thursday, to the authors’ apparent surprise. “Do you know why our project is suddenly becoming hot now?” asked Kurihara, a research scientist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, in an e-mail exchange. (Kurihara’s partner Tsukada is an assistant professor at Ochanomizu University in Tokyo.)

The design of the SpeechJammer is deceptively simple. It consists of a direction-sensitive microphone and a direction-sensitive speaker, a motherboard, a distance sensor and some relatively straightforward code. The concept is simple, too — it operates on the well-studied principle of delayed auditory feedback. By playing someone’s voice back to them, at a slight delay (around 200 milliseconds), you can jam a person’s speech.

Sonic devices have popped up in pop culture in the past. In sci-fi author J.G. Ballard’s short story “The Sound-Sweep,” published in 1960, a vacuum cleaner called a “sonovac” sweeps up the debris of old sounds. The wily German composer Karlheinz Stockhausen had plans for a “sound swallower,” which would cancel unwanted sounds in the environment using the acoustic principle of destructive interference. And in the 1984 German film Decoder, special yellow cassette tapes play “anti-Muzak” that destroys the lulling tones of Muzak, stimulating diners at a fast-food restaurant to throw up en masse and start rioting.

But instead of sci-fi, the Japanese researchers behind the SpeechJammer looked to medical devices used to help people with speech problems. Delayed auditory feedback, or DAF, devices have been used to help stutterers for decades. If a stutterer hears his own voice at a slight delay, stuttering often improves. But if a non-stutterer uses a DAF device designed to help stutterers, he can start stuttering — and the effect is more pronounced if the delay is longer, up to a certain point.

“We utilized DAF to develop a device that can jam remote physically unimpaired people’s speech whether they want it or not,” write the researchers. “[The] device possesses one characteristic that is different from the usual medical DAF device; namely, the microphone and speaker are located distant from the target.”

Being at a distance from the target means it’s possible to aim the device at people who are several feet away — sort of like a TV B-Gone, but for people. Bothered by what someone at a meeting is saying? Point the SpeechJammer at him. Can’t stand your nattering in-laws? Time for the SpeechJammer. In the wrong hands — criminals, for instance, or repressive governments — the device could have potentially sinister applications. For now, it remains a prototype.


“One day I just came by a science museum and enjoyed a demonstration about Delayed Auditory Feedback (DAF) at [the] cognitive science corner,” says Kurihara. “When I spoke to a microphone, my voice came back to me after a few hundred millisecond delay. Then, I could not continue to speak any more. That’s fun!”

Kurihara soon realized his adventures in the science museum could be applicable to other fields. He was already interested in developing a system that “controls appropriate turn-taking at discussions.” The science museum visit was his “aha!” moment. “Then I came up with the gun-type SpeechJammer idea utilizing DAF,” says Kurihara. “That’s the destiny.”

Kurihara enlisted the talents of Koji Tsukada, an assistant professor at Tokyo’s Ochanomizu University whom he calls “the gadget master.” Tsukada has been involved in a number of strange and intriguing projects, including the LunchCommunicator, a “lunchbox-type device which supports communication between family members”; the SmartMakeupSystem, which “helps users find new makeup methods for use with their daily cosmetics”; and the EaTheremin, a “fork-type instrument that enables users to play various sounds by eating foods”.

Tsukada introduced Kurihara to a parametric speaker kit, which they could use to convey sound in a very direction-sensitive way. “After I explained him my idea, he soon agreed to join my project,” says Kurihara. “It was a marriage between science and gadgets!”

As for SpeechJammer’s potentially sinister uses? “We hope SpeechJammer is used for building the peaceful world,” says Kurihara. The world can only hope.


Hackers plan space satellites to combat censorship
by David Meyer / 4 January 2012

The scheme was outlined at the Chaos Communication Congress in Berlin. The project’s organisers said the Hackerspace Global Grid will also involve developing a grid of ground stations to track and communicate with the satellites. Longer term, they hope to help put an amateur astronaut on the moon.

Hobbyists have already put a few small satellites into orbit – usually only for brief periods of time – but tracking the devices has proved difficult for low-budget projects.

The hacker activist Nick Farr first put out calls for people to contribute to the project in August. He said that the increasing threat of internet censorship had motivated the project. “The first goal is an uncensorable internet in space. Let’s take the internet out of the control of terrestrial entities,” Mr Farr said.

He cited the proposed Stop Online Piracy Act (SOPA) in the United States as an example of the kind of threat facing online freedom. If passed, the act would allow for some sites to be blocked on copyright grounds.

Beyond balloons
Although space missions have been the preserve of national agencies and large companies, amateur enthusiasts have launched objects into the heavens. High-altitude balloons have also been used to place cameras and other equipment into what is termed “near space”. The balloons can linger for extended amounts of time – but are not suitable for satellites. The amateur radio satellite Arissat-1 was deployed into low earth orbit last year via a spacewalk by two Russian cosmonauts from the International Space Station as part of an educational project. Students and academics have also launched other objects by piggybacking official rocket launches. However, these devices have often proved tricky to pinpoint precisely from the ground. According to Armin Bauer, a 26-year-old enthusiast from Stuttgart who is working on the Hackerspace Global Grid, this is largely due to lack of funding. “Professionals can track satellites from ground stations, but usually they don’t have to because, if you pay a large sum [to send the satellite up on a rocket], they put it in an exact place,” Mr Bauer said. In the long run, a wider hacker aerospace project aims to put an amateur astronaut onto the moon within the next 23 years. “It is very ambitious so we said let’s try something smaller first,” Mr Bauer added.

Ground network
The Berlin conference was the latest meeting held by the Chaos Computer Club, a decades-old German hacker group that has proven influential not only for those interested in exploiting or improving computer security, but also for people who enjoy tinkering with hardware and software.

When Mr Farr called for contributions to Hackerspace, Mr Bauer and others decided to concentrate on the communications infrastructure aspect of the scheme. He and his teammates are working on their part of the project together with Constellation, an existing German aerospace research initiative that mostly consists of interlinked student projects.

In the open-source spirit of Hackerspace, Mr Bauer and some friends came up with the idea of a distributed network of low-cost ground stations that can be bought or built by individuals. Used together in a global network, these stations would be able to pinpoint satellites at any given time, while also making it easier and more reliable for fast-moving satellites to send data back to earth.

“It’s kind of a reverse GPS,” Mr Bauer said. “GPS uses satellites to calculate where we are, and this tells us where the satellites are. We would use GPS co-ordinates but also improve on them by using fixed sites in precisely-known locations.”

Mr Bauer said the team would have three prototype ground stations in place in the first half of 2012, and hoped to give away some working models at the next Chaos Communication Congress in a year’s time. They would also sell the devices on a non-profit basis. “We’re aiming for 100 euros (£84) per ground station. That is the amount people tell us they would be willing to spend,” Mr Bauer added.

Experts say the satellite project is feasible, but could be restricted by technical limitations. “Low earth orbit satellites, such as have been launched by amateurs so far, do not stay in a single place but rather orbit, typically every 90 minutes,” said Prof Alan Woodward from the computing department at the University of Surrey. “That’s not to say they can’t be used for communications, but obviously only for the relatively brief periods that they are in your view. It’s difficult to see how such satellites could be used as a viable communications grid other than in bursts, even if there were a significant number in your constellation.”

This problem could be avoided if the hackers managed to put their satellites into geostationary orbits above the equator. This would allow them to match the earth’s movement and appear to be motionless when viewed from the ground. However, this would pose a different problem. “It means that they are so far from earth that there is an appreciable delay on any signal, which can interfere with certain internet applications,” Prof Woodward said.

“There is also an interesting legal dimension, in that outer space is not governed by the countries over which it floats. So, theoretically, it could be a place for illegal communication to thrive. However, the corollary is that any country could take the law into its own hands and disable the satellites.”
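The delay Prof Woodward mentions is easy to quantify: geostationary orbit sits roughly 35,786 km above the equator, so even at the speed of light a ground-satellite-ground trip takes an appreciable fraction of a second. A quick back-of-the-envelope check (straight-up path assumed; slant paths are somewhat longer):

```python
# Round-trip signal delay for geostationary vs low earth orbit, at light
# speed, assuming a straight vertical path from the ground station.

GEO_ALTITUDE_KM = 35_786   # geostationary altitude above the equator
LEO_ALTITUDE_KM = 550      # a typical low-earth-orbit altitude, for contrast
SPEED_OF_LIGHT_KM_S = 299_792

def round_trip_ms(altitude_km):
    """One-way up plus one-way down, in milliseconds."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"GEO round trip: {round_trip_ms(GEO_ALTITUDE_KM):.0f} ms")
print(f"LEO round trip: {round_trip_ms(LEO_ALTITUDE_KM):.1f} ms")
```

That is roughly a quarter of a second for geostationary orbit versus a few milliseconds for low earth orbit, which is why the delay interferes with interactive internet applications.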

Need for knowledge
Apart from the ground station scheme, other aspects of the Hackerspace project that are being worked on include the development of new electronics that can survive in space, and the launch vehicles that can get them there in the first place. According to Mr Farr, the “only motive” of the Hackerspace Global Grid is knowledge. He said many participants are frustrated that no person has been sent past low Earth orbit since the Apollo 17 mission in 1972. “This [hacker] community can put humanity back in space in a meaningful way,” Farr said. “The goal is to get back to where we were in the 1970s. Hackers find it offensive that we’ve had the technology since before many of us were born and we haven’t gone back.” Asked whether some might see negative security implications in the idea of establishing a hacker presence in space, Farr said the only downside would be that “people might not be able to censor your internet. Hackers are about open information,” Farr added. “We believe communication is a human right.”


by David Meyer  /  January 3, 2012

Hackers have announced work on a ground station scheme that would make amateur satellites more viable, as part of an aerospace scheme that ultimately aims for the moon. The Hackerspace Global Grid (HGG) project hopes to make it possible for amateurs to more accurately track the home-brewed satellites. As these devices tend to be launched by balloon, they are not placed at a precise point in orbit as professional satellites deployed by rocket usually are. Armin Bauer, one of the three German hobbyists involved in the HGG, said at the Chaos Communication Congress in Berlin that the system involved a reversal of the standard GPS technique. The scheme was announced at the event, which is Europe’s largest hacker conference. “GPS uses satellites to calculate where we are, and this tells us where the satellites are,” Bauer said on Friday, according to the BBC. “We would use GPS co-ordinates but also improve on them by using fixed sites in precisely-known locations.”

According to the HGG website, enthusiasts would site the ground stations using coordinates not only from the US’s GPS system, but also those from the EU’s Galileo, Russia’s GLONASS and ground surveys.

A major aim of the wider ‘Hacker Space Program’ is to create a satellite system for internet communication that is uncensorable by any country. The hackers also want to put someone on the moon by 2034 — something that has not been done since the Apollo 17 mission 39 years ago. Bauer described the moon mission as “very ambitious”. As for the anti-censorship aspects of the scheme, the HGG team said on their site that they are “not yet in a technical position to discuss details”. They also noted that the modular ground stations, which are intended to work out at a non-profit sales price of €100 (£84) each, would be able to work without the internet. “Then you will have to deploy four receiver stations and connect them to your laptop(s) or collect all storage media added to them, where all received data is stored on,” the team wrote. “Then you have to manage the data handling and processing by your own.” However, internet connectivity is the plan for most of the HGG’s usage. The team is working on the project alongside Constellation, a German aerospace research platform for academics that would use the distributed network to derive crucial data.
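Bauer’s “reverse GPS” is essentially multilateration: stations at precisely known sites measure their distance to a satellite (from signal travel time) and solve for its position. A toy two-dimensional version, with made-up station coordinates and noise-free ranges (real systems work in 3-D with noisy timings):

```python
import math

# Toy 2-D multilateration, the geometry behind the "reverse GPS" idea:
# ground stations at known positions measure ranges to a satellite and
# solve for its position. Stations and satellite position are made up.

def locate(stations, ranges):
    """Solve for (x, y) from >= 3 stations with known positions and ranges."""
    (x0, y0), r0 = stations[0], ranges[0]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving linear equations in x and y:
    #   2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2
    A, b = [], []
    for (xi, yi), ri in zip(stations[1:], ranges[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Least squares via the normal equations (A^T A) v = A^T b, 2x2 case.
    ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # known ground sites
truth = (40.0, 70.0)                                 # satellite position
ranges = [math.dist(s, truth) for s in stations]     # measured ranges
x, y = locate(stations, ranges)
print(f"recovered position: ({x:.1f}, {y:.1f})")  # (40.0, 70.0)
```

More stations than unknowns over-determine the system, which is how the fixed, precisely surveyed sites improve on raw GPS coordinates.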

According to Bauer and his colleagues, the internet connectivity would be of “bare minimum” bandwidth, enough to keep basic communications going if needed. “The first step is establishing a means of accurate synchronisation for the distributed network,” the team explained. “Next up are building various receiver modules (ADS-B, amateur satellites, etc) and data processing of received signals. A communication/control channel (read: sending data) is a future possibility but there are no fixed plans on how this could be implemented yet.”

The HGG team hopes to have working prototypes in the first half of the year, with production units ready for distribution by the end of 2012. These would be sold, but people would be able to build their own as well. If the Hacker Space Program really does take off, the satellites would be out of any country’s legal jurisdiction, but this would also leave any country that is capable of doing so free to disable them in some way. The HGG team admitted on their site that there would be nothing they could do to stop this happening. “Since we don’t have actual satellites yet, this falls in the category of problems we’re going to solve once they occur,” they wrote. “We’re doing this because we want to and because it’s fun. We’re trying to concentrate on reasons why this will work, not why it won’t.”

Building a Distributed Satellite Ground Station Network – A Call To Arms
Hackers need satellites. Hackers need internet over satellites. Satellites require ground stations. Let’s build them!

As proposed by Nick Farr et al at CCCamp11, we – the hacker community – are in desperate need of our own communication infrastructure. So here we are, answering the call for the Hacker Space Program with our proposal of a distributed satellite communications ground station network: an affordable way to bring satellite communications to a hackerspace near you. We’re proposing a multi-step approach to work towards this goal by setting up a distributed network of ground stations which will ensure a 24/7 communication window – first tracking, then communicating with satellites. The current state of a proof-of-concept implementation will be presented. This project is closely related to the academic femto-satellite movement, ham radio and Constellation@Home.

The area of small satellites (femto-satellites <0.1 kg up to mini-satellites 100-500 kg) is currently being pushed forward by universities and enables scientific research on a small budget. Gathered data, both scientific and operational, requires communication between satellites and ground stations as well as to the final recipients of the data. One either has to establish one’s own transmission stations or rent already existing ones. The “distributed ground station” project is an extension which will offer, at its final expansion state, the ability to receive data from satellites and relay it to the final recipients. It is therefore proposed that a world-wide distributed network of antennas be set up, connected via the internet, allowing the forwarding of received signals to a central server which will in turn forward them to further recipients. Individual antennas will be set up by volunteers (citizen scientists) and partner institutions (universities, institutes, companies). The core objective of the project is to develop an affordable hardware platform (antenna and receiver) that can be connected to home computers, along with the required software. This platform should enable everyone to receive signals from femto-satellites on a budget, in doing so eliminating the blind spots where there is currently no ground station to receive signals from satellites passing overhead. Emphasis is placed on contributions by volunteers and ham radio operators, who can contribute passively by setting up a receiver station or actively by shaping the project, making it a community-driven effort powered by open-source hardware and applications.

Purposes

The distributed ground stations will enable many different uses. Using distributed ground stations, one could receive beacon signals from satellites and triangulate their position and trajectory. It would therefore be possible to determine the Keplerian elements right after the launch of a new satellite without having to rely on official reports issued at low frequency. Beacon tracking is not limited to satellites; it can also be used to track other objects, such as weather balloons and aerial drones, and record their flight paths. Additionally, beacon signals (sender ID, time, transmission power) could be augmented with housekeeping data to allow troubleshooting in cases where the main data feed is interrupted. Details of the protocol and the maximum data packet length are to be defined during the feasibility study phase. Furthermore, distributed ground stations can be used as “data dumping” receivers. This can reduce the load on the main ground station and distribute data to final recipients more quickly. The FunCube project, an outreach project for schools, already uses a similar approach. A further expansion stage would increase the bandwidth of the individual receivers. As a side effect, distributed ground stations could also be used to analyse meteor scatter and study effects in the ionosphere, by having a ground-based sender emit a known beacon signal to be reflected off meteor trails and/or the ionosphere and received in turn by the distributed ground stations. Depending on the frequency used, further applications in the field of atmospheric research, e.g. local and regional properties of the air and storm clouds, can be imagined. Depending on local laws and guidelines, antennas could also be used to transmit signals. The concept suggests the following expansion stages:

  1. Feasibility study for the individual expansion stages
  2. Beacon-Tracking and sender triangulation
  3. Low-bandwidth satellite-data receiver (up to 10 Kbit/s)
  4. High-bandwidth satellite-data receiver (up to 10 Mbit/s)
  5. Support for data transmission

Each stage is in turn split into sub-projects dealing with hardware and software design and development, prototyping, testing, and batch/mass production.

Network

The networking concept demands that all distributed ground stations be connected via the internet. This can be achieved using the Constellation platform. Constellation is a distributed computing project already used for various simulations related to aerospace applications. The system is based on computation power donated by volunteers, which is combined to effectively build a world-wide distributed super-computer. The software used to do this is BOINC (Berkeley Open Infrastructure for Network Computing), which also offers support for additional hardware, e.g. to establish a sensor network. Another BOINC project is the Quake Catcher Network, which uses acceleration sensors built into laptops, or custom USB dongles, to detect earthquakes. Constellation could be enhanced to allow use of the distributed ground station hardware. Constellation is an academic student group of the DGLR (German aerospace society) at Stuttgart University and is supported by e.V. and Selfnet e.V.

Ham radio and volunteers

Special consideration is given to the ham radio community. Femto-satellites make use of the ham radio bands in the UHF, VHF, and S-band ranges. Ham radio operators should be treated as part of the network: they hold all the required knowledge about the technology needed to operate radio equipment and are also well distributed world-wide. To make the system attractive to volunteers as well, the hardware should be designed in a way that allows manufacturing and distribution on a budget. All designs should also be made public so the community can build its own, improved versions of the system. The hardware should be designed to be easy to use correctly and hard to use incorrectly.
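The beacon-tracking stage (step 2 above) is essentially a multilateration problem: several stations timestamp the same beacon, and the time differences of arrival (TDOA) pin down the sender's position. A minimal sketch of the idea in Python follows; the station coordinates, the noise-free timestamps, and the brute-force grid search are illustrative assumptions, not part of the proposal:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def toa(station, source):
    """Time of arrival at `station` of a beacon emitted at `source` (2-D, metres)."""
    return math.dist(station, source) / C

def locate(stations, arrival_times, grid, step):
    """Brute-force TDOA fit: pick the grid point whose predicted time
    differences (relative to station 0) best match the measured ones."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= grid:
        y = 0.0
        while y <= grid:
            cand = (x, y)
            predicted = [toa(s, cand) - toa(stations[0], cand) for s in stations]
            err = sum((m - p) ** 2 for m, p in zip(measured, predicted))
            if err < best_err:
                best, best_err = cand, err
            y += step
        x += step
    return best

# Three hypothetical stations (metres) and a beacon sender at (60 km, 40 km).
stations = [(0.0, 0.0), (100_000.0, 0.0), (0.0, 100_000.0)]
truth = (60_000.0, 40_000.0)
times = [toa(s, truth) for s in stations]
print(locate(stations, times, grid=100_000.0, step=5_000.0))
```

A real implementation would solve the hyperbolic equations directly and account for clock error between stations, which is why the proposal ties the network to a common internet time base.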


Scientists say they’re getting closer to Matrix-style instant learning – but is it safe?

What price effortless learning? In a paper published in the latest issue of Science, neuroscientists say they’ve developed a novel method of learning that can cause long-lasting improvement in tasks that demand a high level of visual performance. And while the so-called neurofeedback method could one day be used to teach you kung fu, or to aid spinal-injury patients on the road to rehabilitation, evidence also suggests the technology could be used to target people without their knowledge, opening the door to numerous important ethical questions. According to a press release from the National Science Foundation:

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It’s the kind of thing seen in Hollywood’s “Matrix” franchise.

Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person’s visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future.

But here’s the bit that’s really interesting (and also pretty creepy): the researchers found that this novel learning approach worked even when test subjects weren’t aware of what they were learning:

“The most surprising thing in this study is that mere inductions of neural activation patterns…led to visual performance improvement…without presenting the feature or subjects’ awareness of what was to be learned,” said lead researcher Takeo Watanabe. He continues:

We found that subjects were not aware of what was to be learned while behavioral data obtained before and after the neurofeedback training showed that subjects’ visual performance improved specifically for the target orientation, which was used in the neurofeedback training.

Is this research mind-blowing and exciting? Absolutely. I mean come on — automated learning? Yes. Sign me up. But according to research co-author Mitsuo Kawato, the neurofeedback mechanism could just as soon be used for purposes of hypnosis or covert mind control. And that… I’m not so keen on. “We have to be careful,” he explains, “so that this method is not used in an unethical way.”

New research suggests it may be possible to learn high-performance tasks with little or no conscious effort / December 8, 2011

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It’s the kind of thing seen in Hollywood’s “Matrix” franchise. Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person’s visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future. “Adult early visual areas are sufficiently plastic to cause visual perceptual learning,” said lead author and BU neuroscientist Takeo Watanabe of the part of the brain analyzed in the study. Neuroscientists have found that pictures gradually build up inside a person’s brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example. Researchers studied the early visual areas for their ability to cause improvements in visual performance and learning. “Some previous research confirmed a correlation between improving visual performance and changes in early visual areas, while other researchers found correlations in higher visual and decision areas,” said Watanabe, director of BU’s Visual Science Laboratory. “However, none of these studies directly addressed the question of whether early visual areas are sufficiently plastic to cause visual perceptual learning.” Until now.

Boston University post-doctoral fellow Kazuhisa Shibata designed and implemented a method using decoded fMRI neurofeedback to induce a particular activation pattern in targeted early visual areas that corresponded to a pattern evoked by a specific visual feature in a brain region of interest. The researchers then tested whether repetitions of the activation pattern caused visual performance improvement on that visual feature. The result, say researchers, is a novel learning approach sufficient to cause long-lasting improvement in tasks that require visual performance. What’s more, the approach worked even when test subjects were not aware of what they were learning.

“The most surprising thing in this study is that mere inductions of neural activation patterns corresponding to a specific visual feature led to visual performance improvement on the visual feature, without presenting the feature or subjects’ awareness of what was to be learned,” said Watanabe, who developed the idea for the research project along with Mitsuo Kawato, director of ATR lab and Yuka Sasaki, an assistant in neuroscience at Massachusetts General Hospital. “We found that subjects were not aware of what was to be learned while behavioral data obtained before and after the neurofeedback training showed that subjects’ visual performance improved specifically for the target orientation, which was used in the neurofeedback training,” he said.

The finding brings up an inevitable question. Is hypnosis or a type of automated learning a potential outcome of the research? “In theory, hypnosis or a type of automated learning is a potential outcome,” said Kawato. “However, in this study we confirmed the validity of our method only in visual perceptual learning. So we have to test if the method works in other types of learning in the future. At the same time, we have to be careful so that this method is not used in an unethical way.”

Takeo Watanabe
email: takeo [at] bu [dot] edu


“It is controversial whether the adult primate early visual cortex is sufficiently plastic to cause visual perceptual learning (VPL). The controversy occurs partially because most VPL studies have examined correlations between behavioral and neural activity changes rather than cause-and-effect relationships. With an online-feedback method that uses decoded functional magnetic resonance imaging (fMRI) signals, we induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants’ awareness of what was to be learned. The induced activation caused VPL specific to the orientation. These results suggest that early visual areas are so plastic that mere inductions of activity patterns are sufficient to cause VPL. This technique can induce plasticity in a highly selective manner, potentially leading to powerful training and rehabilitative protocols.”
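The closed loop the abstract describes can be caricatured in a few lines: the participant's activity is repeatedly measured, scored against a hidden target pattern, and only the scalar score is fed back, so the participant can improve without ever seeing what is being trained. The hill-climbing "participant" and toy numbers below are invented purely for illustration and have nothing to do with the actual fMRI decoding pipeline:

```python
import random

def similarity(pattern, target):
    """Cosine similarity clamped to [0, 1]; 1 means the induced
    activity pattern matches the decoded target state."""
    dot = sum(p * t for p, t in zip(pattern, target))
    norm = (sum(p * p for p in pattern) * sum(t * t for t in target)) ** 0.5
    return 0.0 if norm == 0 else max(0.0, dot / norm)

def neurofeedback(target, trials=2000, seed=0):
    """Closed loop: the 'participant' perturbs its pattern at random and
    keeps a perturbation only if the scalar feedback improves. The target
    itself is never shown; only the reward is."""
    rng = random.Random(seed)
    pattern = [rng.random() for _ in target]
    score = similarity(pattern, target)
    for _ in range(trials):
        tweak = [p + rng.gauss(0, 0.05) for p in pattern]
        new_score = similarity(tweak, target)
        if new_score > score:
            pattern, score = tweak, new_score
    return score

target = [0.9, 0.1, 0.8, 0.2, 0.7]   # invented stand-in for the target state
print(f"final feedback score: {neurofeedback(target):.3f}")
```

The point of the sketch is only the information flow: reward-driven induction of a pattern can succeed with no explicit knowledge of what the pattern encodes, which is exactly the property the ethical concerns turn on.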

Bacteria, Salt Water Make Hydrogen Fuel
by Jesse Emspak / Sep 21, 2011

The ‘hydrogen economy’ requires a lot of things, but first is an easy and cheap supply of hydrogen. There are lots of ways to make it, but most of them don’t produce large quantities quickly or inexpensively.  Professor Bruce Logan, director of the Hydrogen to Energy Center at Penn State University, has found a way to change that. He used a process called reverse electrodialysis, combined with some ordinary bacteria to get hydrogen out of water by breaking up its molecules. Water — which is made of two atoms of hydrogen and one of oxygen — can be broken down with electricity. (This is a pretty common high school science experiment). The problem is that you need to pump a lot of energy into the water to break the molecules apart.

Logan thought there had to be a better way. He combined two methods of making electricity — one from microbial fuel cell research and the other from reverse electrodialysis. In a microbial fuel cell, bacteria eat organic molecules and, during digestion, release electrons. In a reverse electrodialysis setup, a chamber is separated by a stack of membranes that allow charged particles, or ions, to move in only one direction. Filling the chamber with salt water on one side and fresher water on the other causes ions to try to move to the fresher side. That movement creates a voltage. Adding more membranes increases the voltage, but at a certain point the stack becomes unwieldy. By putting the bacteria in the fresh-water side of the reverse electrodialysis chamber, and using only 11 membranes, Logan was able to generate enough voltage to produce hydrogen. Ordinarily he would need about 0.414 volts. With this system, he can get 0.8 volts, nearly double. (The microbial part of the cell generates 0.3 volts and the RED system creates about 0.5.)
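The voltage budget above can be checked directly from the article's own figures: the microbial half supplies about 0.3 V, the 11-membrane RED stack about 0.5 V, and together they clear the roughly 0.414 V needed to evolve hydrogen in this configuration. A trivial sketch:

```python
# Voltage budget for Logan's combined cell, using only the figures quoted above.
V_REQUIRED = 0.414    # volts needed to evolve hydrogen in this configuration
V_MICROBIAL = 0.3     # contribution from the bacteria (microbial fuel cell part)
V_RED = 0.5           # contribution from the 11-membrane RED stack

total = V_MICROBIAL + V_RED
print(f"combined: {total:.1f} V, required: {V_REQUIRED} V")
print(f"microbial part alone falls short by {V_REQUIRED - V_MICROBIAL:.3f} V")
assert total > V_REQUIRED  # the salinity gradient closes the gap, no grid power
```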

Using seawater, some less salty wastewater with sewage or other organic matter in it and the bacteria, Logan’s apparatus can produce about 1.6 cubic meters of hydrogen for every cubic meter of liquid through the system of chambers and membranes. Another bonus is that less energy goes into pumping the water — if anything, flow rates and pressure have to be kept relatively low so as not to damage the membranes.  Making hydrogen cheaper is a necessity if hydrogen cars are to be a reality. Some car companies already make hydrogen-powered models. The state of Hawaii is already experimenting with hydrogen fuel systems. Producing cheaper, abundant hydrogen — especially from sewer water and seawater — is a big step in that direction.

Harvesting ‘limitless’ hydrogen from self-powered cells
by Mark Kinver / 20 September 2011

US researchers say they have demonstrated how cells fueled by bacteria can be “self-powered” and produce a limitless supply of hydrogen. Until now, they explained, an external source of electricity was required in order to power the process. However, the team added, the current cost of operating the new technology is too high to be used commercially. Details of the findings have been published in the Proceedings of the National Academy of Sciences.

“There are bacteria that occur naturally in the environment that are able to release electrons outside of the cell, so they can actually produce electricity as they are breaking down organic matter,” explained co-author Bruce Logan, from Pennsylvania State University, US. “We use those microbes, particularly inside something called a microbial fuel cell (MFC), to generate electrical power. “We can also use them in this device, where they need a little extra power to make hydrogen gas. “What that means is that they produce this electrical current, which are electrons, they release protons in the water and these combine with electrons.”

Prof Logan said that the technology to utilize this process to produce hydrogen was called a microbial electrolysis cell (MEC). “The breakthrough here is that we do not need to use an electrical power source anymore to provide a little energy into the system. “All we need to do is add some fresh water and some salt water and some membranes, and the electrical potential that is there can provide that power.” The MECs use something called “reverse electrodialysis” (RED), which harvests energy from the difference in salinity, or salt content, between saltwater and freshwater.

In their paper, Prof Logan and colleague Younggy Kim explained how an envisioned RED system would use alternating stacks of membranes that harvest this energy; the movement of charged atoms from the saltwater to the freshwater creates a small voltage that can be put to work. “This is the crucial element of the latest research,” Prof Logan told BBC News, explaining the process behind their system, known as a microbial reverse-electrodialysis electrolysis cell (MREC). “If you think about desalinating water, it takes energy. If you have a freshwater and saltwater interface, that can add energy. We realized that just a little bit of that energy could make this process go on its own.”

Artistic representation of hydrogen molecules (Image: Science Photo Library)

He said that the technology was still in its infancy, which was one of the reasons why it was not being exploited commercially. “Right now, it is such a new technology,” he explained. “In a way it is a little like solar power. We know we can convert solar energy into electricity but it has taken many years to lower the cost. “This is a similar thing: it is a new technology and it could be used, but right now it is probably a little expensive. So the question is, can we bring down the cost?” The next step, Prof Logan explained, was to develop larger-scale cells: “Then it will be easier to evaluate the costs and investment needed to use the technology.” The authors acknowledged that hydrogen had “significant potential as an efficient energy carrier”, but it had been dogged by high production costs and environmental concerns, because it is most often produced using fossil fuels.

Prof Logan observed: “We use hydrogen for many, many things. It is used in making [petrol], it is used in foods etc. Whether we use it in transportation… remains to be seen.” But, the authors wrote that their findings offered hope for the future: “This unique type of integrated system has significant potential to treat wastewater and simultaneously produce [hydrogen] gas without any consumption of electrical grid energy.” Prof Logan added that a working example of a microbial fuel cell was currently on display at London’s Science Museum, as part of the Water Wars exhibition.

Bacterial hydrolysis cell with reverse electrodialysis stack

‘Inexhaustible’ source of hydrogen may be unlocked by salt water / September 19, 2011

A grain of salt or two may be all that microbial electrolysis cells need to produce hydrogen from wastewater or organic byproducts, without adding carbon dioxide to the atmosphere or using grid electricity, according to Penn State engineers. “This system could produce hydrogen anyplace that there is wastewater near sea water,” said Bruce E. Logan, Kappe Professor of Environmental Engineering. “It uses no grid electricity and is completely carbon neutral. It is an inexhaustible source of energy.” Microbial electrolysis cells that produce hydrogen are the basis of this recent work, but previously, to produce hydrogen, the fuel cells required some electrical input. Now, Logan, working with postdoctoral fellow Younggy Kim, is using the difference between river water and seawater to add the extra energy needed to produce hydrogen. Their results, published in the Sept. 19 issue of the Proceedings of the National Academy of Sciences, “show that pure hydrogen gas can efficiently be produced from virtually limitless supplies of seawater and river water and biodegradable organic matter.”

Logan’s cells were between 58 and 64 percent efficient and produced between 0.8 and 1.6 cubic meters of hydrogen for every cubic meter of liquid through the cell each day. The researchers estimated that only about 1 percent of the energy produced in the cell was needed to pump water through the system. The key to these microbial electrolysis cells is reverse electrodialysis, or RED, which extracts energy from the ionic differences between salt water and fresh water. A RED stack consists of alternating ion exchange membranes — positive and negative — with each RED cell contributing additively to the electrical output. “People have proposed making electricity out of RED stacks,” said Logan. “But you need so many membrane pairs and are trying to drive an unfavorable reaction.” Using RED technology alone to hydrolyze water — split it into hydrogen and oxygen — requires 1.8 volts, which would in practice require about 25 pairs of membranes and increase pumping resistance. However, combining RED technology with exoelectrogenic bacteria — bacteria that consume organic material and produce an electric current — reduced the RED stack to five membrane pairs.

Previous work with microbial electrolysis cells showed that they could, by themselves, produce about 0.3 volts of electricity, but not the 0.414 volts needed to generate hydrogen in these fuel cells. Adding less than 0.2 volts of outside electricity released the hydrogen. Now, by incorporating 11 membranes — five membrane pairs that produce about 0.5 volts — the cells produce hydrogen. “The added voltage that we need is a lot less than the 1.8 volts necessary to hydrolyze water,” said Logan. “Biodegradable liquids and cellulose waste are abundant and with no energy in and hydrogen out we can get rid of wastewater and by-products. This could be an inexhaustible source of energy.” Logan and Kim’s research used platinum as a catalyst on the cathode, but subsequent experimentation showed that a non-precious metal catalyst, molybdenum sulfide, had 51 percent energy efficiency.
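Taking the production figures above at face value, a back-of-envelope energy yield is easy to work out. The hydrogen heating value used below (roughly 10.8 MJ per cubic metre of gas at standard conditions, the lower heating value) is a standard reference figure and an assumption on my part, not a number from the article:

```python
# Back-of-envelope energy yield for the cell, from the figures quoted above.
# Assumed (not from the article): hydrogen's lower heating value is roughly
# 10.8 MJ per cubic metre of gas at standard conditions.
LHV_H2 = 10.8              # MJ per cubic metre of hydrogen
RATES = (0.8, 1.6)         # m^3 of H2 per m^3 of liquid per day (article range)
PUMPING_FRACTION = 0.01    # ~1 percent of produced energy, per the article

for rate in RATES:
    energy = rate * LHV_H2                  # MJ per m^3 of liquid per day
    pumping = PUMPING_FRACTION * energy     # energy spent moving the water
    print(f"{rate} m^3 H2/day -> {energy:.2f} MJ/day, pumping ~{pumping:.2f} MJ")
```

So each cubic metre of treated liquid returns on the order of 9 to 17 MJ of chemical energy per day under these assumptions, with pumping losses that are almost negligible by comparison.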

Bruce Logan
email : blogan [at] psu [dot] edu

Batteries That Run On (And Clean) Used Toilet Water
by Ariel Schwartz / Aug 22, 2011

Humans should have a little more respect for dirty toilet water. In recent years, wastewater has become something of a commodity, with nuclear plants paying for treated wastewater to run their facilities, cities relying on so-called “toilet to tap” technology, and breweries turning wastewater into biogas that can be used to power their facilities. Soon enough, wastewater-powered batteries may even keep the lights on in your house or, at the very least, in the industrial plants that clean the wastewater.

Environmental engineer Bruce Logan is developing microbial fuel cells that rely on wastewater bacteria’s desire to munch on organic waste. When these bacteria eat the waste, electrons are released as a byproduct, and Logan’s fuel cell collects those electrons on carbon bristles, where they can move through a circuit and power everything from light bulbs to ceiling fans. Logan’s microbial fuel cells can produce both electrical power and hydrogen, meaning the cells could one day be used to juice up hydrogen-powered vehicles.

Logan’s fuel cells aren’t overly expensive. “In the early reactors, we used very expensive graphite rods and expensive polymers and precious metals like platinum. And we’ve now reached the point where we don’t have to use any precious metals,” he explained to the National Science Foundation. Microbial fuel cells still don’t produce enough power to be useful in our daily lives, but that may change soon — Logan estimates that the fuel cells will be ready to go in the next five to 10 years, at which point they could power entire wastewater treatment plants and still generate enough electricity to power neighboring towns. There may also be ones that use — and in the process desalinate — salt water, using just the energy from the bacteria. And if the microbial fuel cells don’t work out, there’s another option: Chinese researchers have developed a photocatalytic fuel cell that uses light (as opposed to microbial cells) to clean wastewater and generate power. That technology is also far from commercialization, but in a few years, filthy water will power its own cleaning facilities one way or another.

by Duncan Geere / 13 July 11

To a computer, words and sentences appear like data. But AI researchers want to teach computers how to actually understand the meaning of a sentence and learn from it. One of the best ways to test the capability of an AI to do that is to see whether it can understand and follow a set of instructions for a task that it’s unfamiliar with. Regina Barzilay, a professor of computer science and electrical engineering at MIT’s computer science and AI lab, has attempted to do just that — teaching a computer to play Sid Meier’s Civilization. In Civilization, the player is asked to guide a nation from the earliest periods of history through to the present day and into the future. It’s complex, and each action doesn’t necessarily have a predetermined outcome, because the game can react randomly to what you do. Barzilay found that putting a machine-learning system to work on Civ gave it a victory rate of 46 percent, but that when the system was able to use the manual for the game to guide the development of its strategy, it rose dramatically to 79 percent.

It works by word association. Starting completely from scratch, the computer behaves randomly. As it acts, however, it can read words that pop up on the screen, and then search for those words in the manual. As it finds them, it can scan the surrounding text to develop ideas about what action each word corresponds to. Ideas that work well are kept, and those that lead to bad results are discarded. “If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.” The eventual goal is to develop AIs that can extract useful information from manuals written for humans, allowing them to approach a problem armed with just the instructions, rather than having to be painstakingly taught how to deal with any eventuality. Barzilay has already begun to adapt these systems to work with robots.
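The learn-by-association loop described above can be caricatured as a tiny bandit problem: hypotheses pairing on-screen words with actions are reinforced when they lead to good outcomes and weakened otherwise. The miniature "game" below, its cue words, and the update rule are all invented for illustration; the MIT system works on real game state and real manual text:

```python
import random

ACTIONS = ["attack", "defend", "build"]
# Hidden ground truth of the toy game: each cue word has one rewarding action.
TRUE_BEST = {"enemy": "attack", "walls": "defend", "city": "build"}

def play(trials=3000, seed=1):
    """Reinforce word->action hypotheses from reward alone, starting random."""
    rng = random.Random(seed)
    weight = {(w, a): 0.0 for w in TRUE_BEST for a in ACTIONS}
    for _ in range(trials):
        word = rng.choice(list(TRUE_BEST))        # a word appears "on screen"
        if rng.random() < 0.2:                    # explore: act randomly
            action = rng.choice(ACTIONS)
        else:                                     # exploit strongest hypothesis
            action = max(ACTIONS, key=lambda a: weight[(word, a)])
        reward = 1.0 if action == TRUE_BEST[word] else -0.1
        # Move the hypothesis weight toward the observed reward.
        weight[(word, action)] += 0.1 * (reward - weight[(word, action)])
    # Report the learned policy: best action per cue word.
    return {w: max(ACTIONS, key=lambda a: weight[(w, a)]) for w in TRUE_BEST}

print(play())
```

Even this toy version shows the key property the article highlights: nothing tells the agent what the words mean; meanings emerge because hypotheses that consistently pay off survive.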

“Civilization” is a strategy game in which players build empires by, among other things, deciding where to found cities and deploy armies.

Computers learn language (and world domination) by reading the manual
by Darren Quick / July 13, 2011

Researchers at MIT’s Computer Science and Artificial Intelligence Lab have been able to create computers that learn language by doing something that many people consider a last resort when tackling an unfamiliar task – reading the manual (or RTBM). Beginning with virtually no prior knowledge, one machine-learning system was able to infer the meanings of words by reviewing instructions posted on Microsoft’s website detailing how to install a piece of software on a Windows PC, while another was able to learn how to play Sid Meier’s empire-building Civilization II strategy computer game by reading the gameplay manual.

Without so much as an idea of the task they were intended to perform or the language in which the instructions were written, the two similar systems were initially provided only with a list of possible actions they could take, such as moving the cursor or performing right or left clicks. They also had access to the information displayed on the screen and were able to gauge their success, be it successfully installing the software or winning the game. But they didn’t know what actions corresponded to what words in the instructions, or what the objects in the game world represent. Predictably, this means that initially the behavior of the system is pretty random, but as it performs various actions and words appear on the screen it looks for instances of that word in the instruction set as well as searching the surrounding text for associated words. In this way it is able to make assumptions about what actions the words correspond to and assumptions that consistently lead to good results are given greater credence, while those that consistently lead to bad results are abandoned.

Using this method, the system attempting to install software was able to reproduce 80 percent of the steps that a person reading the same instructions would carry out. Meanwhile, the system playing Civilization II ended up winning 79 percent of the games it played, compared to a winning rate of 46 percent for a version of the system that didn’t rely on the written instructions. What makes the results even more impressive for the Civilization II-playing system is that the manual only provided instructions on how to play the game. “They don’t tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own,” said Regina Barzilay, associate professor of computer science and electrical engineering, who took the best-paper award at the annual meeting of the Association for Computational Linguistics (ACL) in 2009 for the software-installing system. “Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says graduate student S. R. K. Branavan, who, along with David Silver of University College London, worked with Barzilay to develop the system that learned to play Civilization II. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways,” Branavan said.

Although the main purpose of the project was to demonstrate that computer systems which learn the meanings of words through exploratory interaction with their environments are a promising area for future research, Barzilay and Branavan say that such systems could also have more near-term applications. Most computer games that let a player play against the computer require programmers to develop strategies for the computer to follow and to write algorithms that execute them. Systems like those developed at MIT could be used to automatically create algorithms that perform better than the human-designed ones. Such machine-learning systems also have applications in the field of robotics, and Barzilay and her students at MIT have begun to adapt their meaning-inferring algorithms to this purpose. Let’s just hope they don’t take the lessons learned playing Civilization II and try for the world domination win in the real world.

Screen shot of Sid Meier's strategy computer game, Civilization II

Computer learns language by playing games
By basing its strategies on the text of a manual, a computer infers the meanings of words without human supervision.
by Larry Hardesty, MIT / July 12 2011

Computers are great at treating words as data: Word-processing programs let you rearrange and format text however you like, and search engines can quickly find a word anywhere on the Web. But what would it mean for a computer to actually understand the meaning of a sentence written in ordinary English — or French, or Urdu, or Mandarin?

One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results. In 2009, at the annual meeting of the Association for Computational Linguistics (ACL), researchers in the lab of Regina Barzilay, associate professor of computer science and electrical engineering, took the best-paper award for a system that generated scripts for installing a piece of software on a Windows computer by reviewing instructions posted on Microsoft’s help site. At this year’s ACL meeting, Barzilay, her graduate student S. R. K. Branavan and David Silver of University College London applied a similar approach to a more complicated problem: learning to play “Civilization,” a computer game in which the player guides the development of a city into an empire across centuries of human history. When the researchers augmented a machine-learning system so that it could use a player’s manual to guide the development of a game-playing strategy, its rate of victory jumped from 46 percent to 79 percent.

Starting from scratch
“Games are used as a test bed for artificial-intelligence techniques simply because of their complexity,” says Branavan, who was first author on both ACL papers. “Every action that you take in the game doesn’t have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways.” Moreover, Barzilay says, game manuals have “very open text. They don’t tell you how to win. They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own.” Relative to an application like the software-installing program, Branavan explains, games are “another step closer to the real world.”

The extraordinary thing about Barzilay and Branavan’s system is that it begins with virtually no prior knowledge about the task it’s intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicks or left-clicks, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn’t know what actions correspond to what words in the instruction set, and it doesn’t know what the objects in the game world represent.

So initially, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
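The loop described above, where random exploration gradually concentrates credence on word-action hypotheses that pay off, can be sketched in a few lines. The Python below is a toy illustration only, not the MIT system: the action names, reward values and multiplicative weight update are all invented for clarity.

```python
import random
from collections import defaultdict

random.seed(0)  # make the toy run reproducible

ACTIONS = ["left_click", "right_click", "move_cursor"]

class WordActionLearner:
    """Toy learner: keeps a credence weight for every (word, action) pair."""

    def __init__(self, learning_rate=0.1):
        self.weights = defaultdict(lambda: 1.0)  # (word, action) -> credence
        self.lr = learning_rate

    def choose_action(self, visible_words):
        # Score each action by the summed credence of hypotheses linking
        # it to the words currently on screen, then sample proportionally.
        scores = {a: sum(self.weights[(w, a)] for w in visible_words)
                  for a in ACTIONS}
        r = random.uniform(0, sum(scores.values()))
        for action, score in scores.items():
            r -= score
            if r <= 0:
                return action
        return ACTIONS[-1]

    def update(self, visible_words, action, reward):
        # Multiplicative update: hypotheses behind a good outcome gain
        # credence, those behind a bad outcome lose it.
        for w in visible_words:
            self.weights[(w, action)] *= (1.0 + self.lr * reward)

learner = WordActionLearner()
for _ in range(200):
    words = ["install", "click", "next"]          # words "on screen"
    action = learner.choose_action(words)
    # Toy environment: left-clicking is the right move when "click" shows.
    reward = 1.0 if action == "left_click" else -0.5
    learner.update(words, action, reward)

best = max(ACTIONS, key=lambda a: learner.weights[("click", a)])
print(best)
```

After a couple of hundred toy trials the hypothesis linking “click” to a left-click dominates, which mirrors how consistently successful pairings win out in the researchers’ far more sophisticated probabilistic model.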

Proof of concept
In the case of software installation, the system was able to reproduce 80 percent of the steps that a human reading the same instructions would execute. In the case of the computer game, it won 79 percent of the games it played, while a version that didn’t rely on the written instructions won only 46 percent. The researchers also tested a more-sophisticated machine-learning algorithm that eschewed textual input but used additional techniques to improve its performance. Even that algorithm won only 62 percent of its games.

“If you’d asked me beforehand if I thought we could do this yet, I’d have said no,” says Eugene Charniak, University Professor of Computer Science at Brown University. “You are building something where you have very little information about the domain, but you get clues from the domain itself.” Charniak points out that when the MIT researchers presented their work at the ACL meeting, some members of the audience argued that more sophisticated machine-learning systems would have performed better than the ones to which the researchers compared their system. But, Charniak adds, “it’s not completely clear to me that that’s really relevant. Who cares? The important point is that this was able to extract useful information from the manual, and that’s what we care about.”

Most computer games as complex as “Civilization” include algorithms that allow players to play against the computer, rather than against other people; the games’ programmers have to develop the strategies for the computer to follow and write the code that executes them. Barzilay and Branavan say that, in the near term, their system could make that job much easier, automatically creating algorithms that perform better than the hand-designed ones.
But the main purpose of the project, which was supported by the National Science Foundation, was to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments are a promising subject for further research. And indeed, Barzilay and her students have begun to adapt their meaning-inferring algorithms to work with robotic systems.

Regina Barzilay
email : regina [at] [dot] edu

S.R.K. Branavan
email : branavan [at] [dot] edu

An incidental challenge in building a computer system that could decipher Ugaritic (inscribed on tablet) was developing a way to digitally render Ugaritic symbols (inset).

Computer automatically deciphers ancient language
A new system that took a couple hours to decipher much of the ancient language Ugaritic could help improve online translation software.
by Larry Hardesty, MIT / June 30 2010

In his 2002 book Lost Languages, Andrew Robinson, then the literary editor of the London Times’ higher-education supplement, declared that “successful archaeological decipherment has turned out to require a synthesis of logic and intuition … that computers do not (and presumably cannot) possess.” Regina Barzilay, an associate professor in MIT’s Computer Science and Artificial Intelligence Lab, Ben Snyder, a grad student in her lab, and the University of Southern California’s Kevin Knight took that claim personally. At the Annual Meeting of the Association for Computational Linguistics in Sweden next month, they will present a paper on a new computer system that, in a matter of hours, deciphered much of the ancient Semitic language Ugaritic. In addition to helping archeologists decipher the eight or so ancient languages that have so far resisted their efforts, the work could also help expand the number of languages that automated translation systems like Google Translate can handle.

To duplicate the “intuition” that Robinson believed would elude computers, the researchers’ software makes several assumptions. The first is that the language being deciphered is closely related to some other language: In the case of Ugaritic, the researchers chose Hebrew. The next is that there’s a systematic way to map the alphabet of one language on to the alphabet of the other, and that correlated symbols will occur with similar frequencies in the two languages. The system makes a similar assumption at the level of the word: The languages should have at least some cognates, or words with shared roots, like main and mano in French and Spanish, or homme and hombre. And finally, the system assumes a similar mapping for parts of words. A word like “overloading,” for instance, has both a prefix — “over” — and a suffix — “ing.” The system would anticipate that other words in the language will feature the prefix “over” or the suffix “ing” or both, and that a cognate of “overloading” in another language — say, “surchargeant” in French — would have a similar three-part structure.
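The cognate assumption can be made concrete with a rough similarity score. The sketch below uses plain Levenshtein edit distance, an invented stand-in chosen for illustration; the actual system models character-level correspondences probabilistically rather than counting edits.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete everything
    for j in range(n + 1):
        d[0][j] = j                      # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def cognate_score(a, b):
    """1.0 for identical words, approaching 0.0 for unrelated ones."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# The article's own examples: French/Spanish "man" and "hand".
print(cognate_score("homme", "hombre"))
print(cognate_score("main", "mano"))
```

Word pairs that score high under a measure like this become candidate cognates, which can then be checked for consistency against the hypothesized alphabet and affix mappings.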

The system plays these different levels of correspondence off of each other. It might begin, for instance, with a few competing hypotheses for alphabetical mappings, based entirely on symbol frequency — mapping symbols that occur frequently in one language onto those that occur frequently in the other. Using a type of probabilistic modeling common in artificial-intelligence research, it would then determine which of those mappings seems to have identified a set of consistent suffixes and prefixes. On that basis, it could look for correspondences at the level of the word, and those, in turn, could help it refine its alphabetical mapping. “We iterate through the data hundreds of times, thousands of times,” says Snyder, “and each time, our guesses have higher probability, because we’re actually coming closer to a solution where we get more consistency.” Finally, the system arrives at a point where altering its mappings no longer improves consistency.
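The starting point of that iteration, mapping symbols by frequency rank alone, is easy to illustrate. The sketch below performs only this first step; the symbols and corpora are invented, and the real system goes on to refine such an initial guess probabilistically over hundreds or thousands of passes.

```python
from collections import Counter

def frequency_rank_mapping(unknown_text, known_text):
    """Pair each symbol of the unknown script with the known-script
    symbol occupying the same frequency rank."""
    unknown_ranked = [s for s, _ in Counter(unknown_text).most_common()]
    known_ranked = [s for s, _ in Counter(known_text).most_common()]
    return dict(zip(unknown_ranked, known_ranked))

# Invented example: '#', '@' and '%' stand in for symbols of a lost script.
unknown_corpus = "##@%@##@"
known_corpus = "aabcbaab"   # text in the related, known language
mapping = frequency_rank_mapping(unknown_corpus, known_corpus)
print(mapping)
```

On real corpora many such rank-based mappings are nearly tied, which is why the system keeps several competing hypotheses alive and lets affix and cognate consistency decide among them.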

Ugaritic has already been deciphered: Otherwise, the researchers would have had no way to gauge their system’s performance. The Ugaritic alphabet has 30 letters, and the system correctly mapped 29 of them to their Hebrew counterparts. Roughly one-third of the words in Ugaritic have Hebrew cognates, and of those, the system correctly identified 60 percent. “Of those that are incorrect, often they’re incorrect only by a single letter, so they’re often very good guesses,” Snyder says. Furthermore, he points out, the system doesn’t currently use any contextual information to resolve ambiguities. For instance, the Ugaritic words for “house” and “daughter” are spelled the same way, but their Hebrew counterparts are not. While the system might occasionally get them mixed up, a human decipherer could easily tell from context which was intended.

Nonetheless, Andrew Robinson remains skeptical. “If the authors believe that their approach will eventually lead to the computerised ‘automatic’ decipherment of currently undeciphered scripts,” he writes in an e-mail, “then I am afraid I am not at all persuaded by their paper.” The researchers’ approach, he says, presupposes that the language to be deciphered has an alphabet that can be mapped onto the alphabet of a known language — “which is almost certainly not the case with any of the important remaining undeciphered scripts,” Robinson writes. It also assumes, he argues, that it’s clear where one character or word ends and another begins, which is not true of many deciphered and undeciphered scripts.

“Each language has its own challenges,” Barzilay agrees. “Most likely, a successful decipherment would require one to adjust the method for the peculiarities of a language.” But, she points out, the decipherment of Ugaritic took years and relied on some happy coincidences — such as the discovery of an axe that had the word “axe” written on it in Ugaritic. “The output of our system would have made the process orders of magnitude shorter,” she says.

Indeed, Snyder and Barzilay don’t suppose that a system like the one they designed with Knight would ever replace human decipherers. “But it is a powerful tool that can aid the human decipherment process,” Barzilay says. Moreover, a variation of it could also help expand the versatility of translation software. Many online translators rely on the analysis of parallel texts to determine word correspondences: They might, for instance, go through the collected works of Voltaire, Balzac, Proust and a host of other writers, in both English and French, looking for consistent mappings between words. “That’s the way statistical translation systems have worked for the last 25 years,” Knight says.
But not all languages have such exhaustively translated literatures: At present, Snyder points out, Google Translate works for only 57 languages. The techniques used in the decipherment system could be adapted to help build lexicons for thousands of other languages. “The technology is very similar,” says Knight, who works on machine translation. “They feed off each other.”

Next Step in 3D Printing: Your Kidneys
by Anya Kamenetz / Mar 3, 2011

Dr. Anthony Atala, a regenerative medicine specialist at Wake Forest University, is pioneering the use of printing techniques to reconstruct and repair human flesh and organs. The basis is a combination of cultured human cells and scaffolding built or woven from organic material. In one staggering setup, a patient lies on a table and a flatbed scanner literally scans her wound, followed by a printer that adds just the right types of tissues back on at the right depth. “You can print right on the patient,” Dr. Atala told the TED audience on Thursday. “I know it sounds funny, but it’s true.” The next step in this evolution is the use of 3-D printers, which I wrote about on Tuesday, to rebuild human organs. Ninety percent of patients on the organ donation list are waiting for kidneys, a fist-size organ with a profusion of tiny blood vessels. To build a customized kidney, first you scan the patient with a CT scanner, then use 3D imaging techniques to create a computerized form that the printer can read, and finally build the organ layer by layer. Printing a new kidney takes about six hours, and it lasts for a lifetime; a young man who had the surgery in the early days, 10 years ago, came out on stage.
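The layer-by-layer arithmetic behind a figure like that six-hour print is straightforward to sketch. Every number below (model height, layer thickness, seconds per layer) is invented for illustration, not taken from Atala’s process; the point is only how layer count and print time relate.

```python
import math

def plan_layers(model_height_mm, layer_thickness_mm):
    """Number of cross-sectional layers needed to build the model."""
    return math.ceil(model_height_mm / layer_thickness_mm)

def print_time_hours(num_layers, seconds_per_layer):
    """Total print time if every layer takes the same time to deposit."""
    return num_layers * seconds_per_layer / 3600.0

# Hypothetical parameters: an 11 cm organ model printed in 0.1 mm layers.
layers = plan_layers(model_height_mm=110.0, layer_thickness_mm=0.1)
hours = print_time_hours(layers, seconds_per_layer=20)
print(layers, round(hours, 1))
```

Halving the layer thickness doubles both the layer count and the print time, which is the basic trade-off between resolution and speed in any layer-by-layer process.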

Surgeon Creates New Kidney Onstage / March 4, 2011

“It’s like baking a cake,” Anthony Atala of the Wake Forest Institute of Regenerative Medicine said as he cooked up a fresh kidney on stage at a TED Conference in the California city of Long Beach. Scanners are used to take a 3-D image of a kidney that needs replacing, then a tissue sample about half the size of a postage stamp is used to seed the computerized process, Atala explained. The organ “printer” then works layer-by-layer to build a replacement kidney replicating the patient’s tissue. College student Luke Massella was among the first people to receive a printed kidney during experimental research a decade ago when he was just 10 years old. He said he was born with spina bifida and his kidneys were not working. “Now, I’m in college and basically trying to live life like a normal kid,” said Massella, who was reunited with Atala at TED. “This surgery saved my life and made me who I am today.” About 90 percent of people waiting for transplants are in need of kidneys, and the need far outweighs the supply of donated organs, according to Atala. “There is a major health crisis today in terms of the shortage of organs,” Atala said. “Medicine has done a much better job of making us live longer, and as we age our organs don’t last.”

Anthony Atala

New Device Prints Human Tissue
by Bill Christensen / 29 December 2009

Invetech has delivered what it calls the “world’s first production model 3D bio-printer” to Organovo, developers of the proprietary NovoGen bioprinting technology. Organovo will in turn supply the devices to institutions investigating human tissue repair and organ replacement. Keith Murphy, CEO of Organovo, based in San Diego, said the units represent a breakthrough because they provide for the first time a flexible technology platform for organizations working on many different types of tissue construction and organ replacement. “Scientists and engineers can use the 3D bio-printers to enable placing cells of almost any type into a desired pattern in 3D,” Murphy said. “Researchers can place liver cells on a preformed scaffold, support kidney cells with a co-printed scaffold, or form adjacent layers of epithelial and stromal soft tissue that grow into a mature tooth. Ultimately the idea would be for surgeons to have tissue on demand for various uses, and the best way to do that is get a number of bio-printers into the hands of researchers and give them the ability to make three dimensional tissues on demand.”

The 3D bio-printers include an intuitive software interface that allows engineers to build a model of the tissue construct before the printer commences the physical construction of the organs cell-by-cell using automated, laser-calibrated print heads. “Building human organs cell-by-cell was considered science fiction not that long ago,” said Fred Davis, president of Invetech, which has offices in San Diego and Melbourne. “Through this clever combination of technology and science we have helped Organovo develop an instrument that will improve people’s lives, making the regenerative medicine that Organovo provides accessible to people around the world.” Science fiction, indeed. Artificial organs have been a science fiction staple since Philip K. Dick wrote about artiforgs (artificial organs) in his 1964 novel Cantata 140 and Larry Niven described artificially grown organs in his 1968 novel A Gift From Earth.

Behind the Scenes of Bioprinting
by Dave Bullock / July 11, 2010

Say goodbye to donor lists and organ shortages. A biotech firm has created a printer that prints veins using a patient’s own cells. The device could potentially create whole organs in the future. “Right now we’re really good at printing blood vessels,” says Ben Shepherd, senior research scientist at regenerative-medicine company Organovo. “We printed 10 this week. We’re still learning how to best condition them to be good, strong blood vessels.” Most organs in the body are filled with veins, so the ability to print vascular tissue is a critical building block for complete organs. The printed veins are about to start testing in animal trials, and eventually go through human clinical trials. If all goes well, in a few years you may be able to replace a vein that has deteriorated (due to frequent injections of chemo treatment, for example) with custom-printed tissue grown from your own cells. The barriers to full-organ printing are not just technological. The first organ-printing machine will cost hundreds of millions of dollars to develop, test, produce and market. Not to mention the difficulty any company will have getting FDA approval. “If Organovo will be able to raise enough money this company has [the] potential to succeed as [the] first bioprinting company but only time will show,” says Dr. Vladimir Mironov, director of advanced tissue biofabrication at the Medical University of South Carolina. Organovo walked through the process it uses to print blood vessels on the custom bioprinter.

Shepherd places a bioreactor inside an incubator where it will be pumped with a growth medium for a few days. The bioreactor uses a special mixture of chemicals that are similar to what cells would see when they grow inside the body, which will help the cells become strong vascular tissue.

Stem Cells
Senior research scientist Ben Shepherd removes stem cells from a bath of liquid nitrogen. The cells will be cultured to greatly increase their number before being loaded into the printer. Eventually these cells could be taken from a variety of places in a patient’s body (fat, bone marrow and skin cells) and made into a working vein.

After the cells are defrosted they are cultured in a growth medium (above). This allows the cells to multiply and grow so they can be used to form veins. The medium also uses special chemicals to tell the stem cells to grow into the cell type required, in this case blood-vessel cells. Once enough cells are produced, they are separated from the growth medium using a centrifuge (below) and compressed into pellets.

photos: Dave Bullock/

Hydrogel Scaffolding
The first step of the printing process is to lay down a material called hydrogel, which is used as a temporary scaffolding to support the vein tissue. The custom-made printer uses two pump heads that squirt out either the scaffolding structure or the cells into a petri dish. The pump heads are mounted on a precision robotic assembly for microscopic accuracy. The head on the right is dipping into the container of hydrogel in the photo above.

A chamber called a bioreactor is used to stimulate the vein. It’s prepared before the vein is printed. The bioreactor is a fairly standard piece of biotech machinery. It is machined out of a block of aluminum that surrounds a plastic container with various ports. These ports are used to pump in chemicals that will feed the growing vein.

Before printing the veins, tubes of the cultured cells are loaded into the print head manually, like a biomass print cartridge.

Hydrogel Mold for Blood Vessels
Lines of the hydrogel are laid down in parallel in a trough shape on the petri dish. Then cylinders of cell pellets are printed into the trough. One more cylinder of hydrogel is printed into the middle of the cells, which serves to create the hole inside the vein where blood will eventually flow (below).

Illustration courtesy Organovo
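The trough-and-rod layout described above can be pictured as a simple text cross-section. Everything in the sketch below is invented for illustration (the grid size, the characters, the proportions); H marks hydrogel and C marks cell pellets, with a sacrificial hydrogel rod where the lumen will form.

```python
def vessel_cross_section(width=7):
    """Build a toy cross-section of the printed construct, row by row."""
    mid = width // 2
    floor = "H" * width                    # hydrogel trough floor
    walls = "H" + "C" * (width - 2) + "H"  # cell pellets between hydrogel walls
    lumen = list(walls)
    lumen[mid] = "H"                       # rod that keeps the lumen open
    return [floor, walls, "".join(lumen), walls]

for row in vessel_cross_section():
    print(row)
```

When the hydrogel is later released, the C cells fuse into a tube and the central H rod leaves behind the hollow channel, matching the growing-into-veins step the article describes next.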

Growing Into Veins
The printed veins are then left in a different growth medium for several weeks. The cells soon release from the hydrogel, and a hollow tube of vascular cells is left behind.

Happy Veins
The printed cells in tubular form are then placed into the bioreactor. The bioreactor (above) pumps a special cocktail of proteins, buffers and various other chemicals (below) through the printed vein. This conditions the cells to be good, strong veins and keeps them happy.

Finished Product
After their stay in the bioreactor, the pellets of cells grow together to form veins which can then be implanted in the patient. Because the veins are grown from the patient’s own cells, their body is more likely to accept the implanted vein.