Photo taken by Camille Flammarion in 1902 of lightning striking the Eiffel Tower on a summer night.

Scientists work to harness lightning for electricity
by Candace Lombardi / August 26, 2010

Nikola Tesla would be jealous. A group of chemists from the University of Campinas in Brazil presented research on Wednesday claiming they’ve figured out how electricity is formed and released in the atmosphere. Based on this knowledge, the team said, a device could be developed for extracting electrical charges from the atmosphere and using them for electricity. The team, led by Fernando Galembeck, says it discovered the process by simulating, in a laboratory, the reactions of water vapor with dust particles common to the atmosphere.

They found that silica becomes more negatively charged when high levels of water vapor are present in the air, in other words during high humidity. They also found that aluminum phosphate becomes more positively charged in high humidity. “This was clear evidence that water in the atmosphere can accumulate electrical charges and transfer them to other materials it comes into contact with. We are calling this ‘hygroelectricity,’ meaning ‘humidity electricity,’” Galembeck said in a statement. But the discovery, if true, goes against the commonly held theory, endorsed by bodies such as the International Union of Pure and Applied Chemistry (IUPAC), that water is electroneutral–that it cannot store a charge. Galembeck, who is a member of the IUPAC, told New Scientist that he does not dispute the principle of electroneutrality in theory, but that he believes real-life substances like water have ion imbalances that can allow them to produce a charge.

The hygroelectricity discovery could lead to the invention of a device that is able to tap into all that energy. Akin to a solar panel, a hygroelectrical panel on a roof would capture atmospheric electricity that could then be transferred for a building’s energy use, according to the University of Campinas team. In addition to capturing electricity, such a device could also be used to drain the area around a building of its electrical charge, preventing the atmospheric discharge of electricity during storms–aka lightning. “We certainly have a long way to go. But the benefits in the long range of harnessing hygroelectricity could be substantial,” Galembeck said. The research was presented in Boston at the 240th National Meeting of the American Chemical Society.

Harness lightning for energy, thanks to high humidity?
by David Biello / Aug 26, 2010

Why do the roiling, black clouds of a thunderstorm produce lightning? Ben Franklin and others helped prove that such lightning was discharged electricity, but what generates that electricity in such prodigious quantities? After all, storms generate millions of lightning bolts around the globe every year—even volcanoes can get in on the act as the recent eruption of Eyjafjallajökull did when photographs captured bolts of blue in the ash cloud.

Perhaps surprisingly, scientists still debate how exactly lightning forms; theories range from colliding slush and ice particles in convective clouds to, more speculatively, a rain of charged solar particles seeding the skies with electrical charge. Or perhaps the uncertainty about lightning formation is not surprising, given all that remains unknown about clouds and the perils of studying a storm—an electrical discharge can deliver millions of joules of energy in milliseconds.

But Brazilian researchers claim that their lab experiments imply that the water droplets that make up such storms can carry charge—an overturning of decades of scientific understanding that such water droplets must be electrically neutral. Specifically, chemists led by Fernando Galembeck of the University of Campinas found that when electrically isolated metals were exposed to high humidity—that is, air laden with water vapor—the metals gained a small negative charge.

The same holds true for many other metals, according to Galembeck’s presentation at the American Chemical Society meeting in Boston on August 25—a phenomenon they’ve dubbed hygroelectricity, or humid electricity. “My colleagues and I found that common metals—aluminum, stainless steel and others—acquire charge when they are electrically isolated and exposed to humid air,” he says. “This is an extension to previously published results showing that insulators acquire charge under humid air. Thus, air is a charge reservoir.” The finding would seem to confirm anecdotes from the 19th century of workers literally shocked—rather than scalded—by steam. And it might explain how enough charge builds up for lightning, Galembeck argues.

The scientists envision devices to harvest this charge from air thick with water vapor—a metal piece, like a lightning rod, connected to one pole of a capacitor, a device for separating and storing electric charge. The other pole of the capacitor is grounded. Expose the metal to high humidity (perhaps within a shielded box) and harvest voltage. “If this could be done safely, it would allow us to have better control of thunderstorms,” Galembeck says, envisioning a renewable energy source from the humid air of the tropics and mid-latitudes.
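The storage step of this scheme is just a capacitor holding harvested charge, so its yield can be estimated with the standard relation E = Q²/2C. The sketch below uses purely illustrative numbers (a 1 m² collector accumulating the roughly 300 microcoulombs per square metre reported for these experiments, feeding an assumed 1 µF capacitor); none of these values describe a working device:

```python
# Back-of-envelope estimate for the capacitor-harvesting scheme described
# above. All numbers are illustrative assumptions, not measured results.

def capacitor_energy(charge_c, capacitance_f):
    """Energy stored in a capacitor holding charge Q: E = Q^2 / (2C)."""
    return charge_c**2 / (2 * capacitance_f)

charge = 300e-6        # coulombs, ~the charge density reported per m^2
capacitance = 1e-6     # farads, an assumed storage capacitor
energy_j = capacitor_energy(charge, capacitance)
print(f"Stored energy: {energy_j:.3f} J")   # 0.045 J -- a very small yield
```

Even with generous assumptions the stored energy is a small fraction of a joule, which helps explain why some chemists quoted in these articles doubt the idea can scale to a power source.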

Unfortunately, the finding violates the principle of electric neutrality, in which the differently charged molecules of an electrolyte like water cancel out. And although geophysicists and other atmospheric scientists may not know all the details of how lightning forms, they do have a general sense, and hygroelectricity seems to ignore what is largely understood. “It is utter nonsense,” says atmospheric physicist William Beasley of the University of Oklahoma, a lightning researcher. “All seriously considered mechanisms for electrification of thunderstorms that can lead to the kind of electric fields that are required for lightning involve convection and rebounding collisions between graupel [a slush ball] and ice particles in convective storms.”

Similar efforts to capture the electricity in a lightning bolt have failed, most recently, Alternate Energy Holdings’ would-be lightning-capture tower outside Houston. The wired tower never worked. “This concept has been disproven many times over,” Beasley notes. What’s more, the amount of energy in a lightning bolt—never mind its crackling electric grandeur—is but a fraction of the amount of energy required to run even one 100-watt lightbulb, which uses 100 joules every second, for a day. But taming lightning is a prospect that has tempted experimenters since at least the Olympian thunderbolts of Zeus. Of course, the vast majority of the energy is in the storm itself—hurricanes, for example, have the heat energy of 10,000 nuclear bombs. Capturing that energy might prove frazzling.
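The lightbulb comparison above can be checked with simple arithmetic, and the discharge figure quoted earlier (millions of joules delivered in milliseconds) shows why lightning is spectacular in power even when modest in energy:

```python
# Energy to run one 100 W bulb for a day, per the comparison above.
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 s
bulb_day_j = 100.0 * SECONDS_PER_DAY    # 100 joules every second, all day
print(f"One bulb-day: {bulb_day_j:.2e} J")      # 8.64e+06 J (~8.6 MJ)

# A discharge delivering millions of joules in milliseconds is enormous
# *power* even when the total *energy* is modest:
peak_power_w = 1e6 / 1e-3               # ~1 gigawatt, briefly
print(f"Peak discharge power: {peak_power_w:.0e} W")
```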

Can we grab electricity from muggy air?
by Colin Barras / 26 August 2010

Every cloud has a silver lining: wet weather could soon be harnessed as a power source, if a team of chemists in Brazil is to be believed. In 1840, workers in Newcastle upon Tyne, UK, reported painful electric shocks when they came into close contact with steam leaking from factory boilers. Both Michael Faraday and Alessandro Volta puzzled over the mysterious phenomenon, dubbed steam electricity, but it was ultimately forgotten without being fully understood.

Fernando Galembeck at the University of Campinas in São Paulo, Brazil, is one of a small number of researchers who thinks there is a simple explanation, but it involves accepting that water can store charge – a controversial idea that violates the principle of electroneutrality. This principle – which states that the negatively and positively charged particles in an electrolyte cancel each other out – is widely accepted by chemists, including the International Union of Pure and Applied Chemistry (IUPAC). “I don’t dispute the IUPAC statement for the principle of electroneutrality,” says Galembeck. “But it is seldom applicable to real substances,” he says, because they frequently show ion imbalances, which produce a measurable charge.

His team electrically isolated chrome-plated brass tubes and then increased the humidity of the surrounding atmosphere. Once the relative humidity reached 90 per cent, the initially uncharged tube gained a small but detectable negative charge of 300 microcoulombs per square metre – equivalent to a charge millions of times smaller than that stored in an AA battery.
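The battery comparison can be reproduced with round numbers. A typical AA cell holds about 2000 mAh (an assumed ballpark figure, not taken from the article); converting that to coulombs and dividing by the measured surface charge gives the "millions of times smaller" ratio:

```python
# Reproduce the AA-battery comparison above with round numbers.
aa_capacity_c = 2.0 * 3600      # 2000 mAh = 2 Ah = 7200 coulombs (assumed)
surface_charge_c = 300e-6       # ~300 microcoulombs on 1 m^2 of tube surface
ratio = aa_capacity_c / surface_charge_c
print(f"AA cell holds ~{ratio:.1e} times more charge")  # ~2.4e+07
```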

Sensitive Victorians
The Victorian workers would have had to be particularly sensitive souls to complain of such a shock, but Galembeck thinks his study shows steam electricity may be a credible phenomenon. He thinks the charge builds up because of a reaction between the chrome oxide layer that forms on the surface of the tube and the water in the atmosphere. As the relative humidity rises, more water condenses onto the tube’s surface. Hydrogen ions in the water react with the chrome oxide, leading to an ion imbalance that imparts excess charge onto the isolated metal.

The work finds favour with Gerald Pollack at the University of Washington in Seattle. Last year he suggested that pure water could store charge and behave much like a battery, after finding that passing a current between two submerged electrodes created a pH gradient in the water that persisted for an hour once the current had been switched off. He says this is evidence that the water stores areas of positive and negative charge, but the experiment led to a lively debate in the pages of the journal Langmuir over whether the results really violated the principle of electroneutrality or whether there were salt impurities in the water that led it to behave like a conventional electrochemical cell. Pollack calls the Campinas team’s work “interesting”. “It opens the door to many new possibilities,” he says.

Power from air
Galembeck thinks those possibilities include harnessing atmospheric humidity as a renewable power source, as light is converted to electricity in solar panels. “My work is currently targeted to verify this possibility and to explore it,” he says. However, he acknowledges that most researchers remain to be convinced that what he calls “hygroelectricity” will ever get off the ground.

Allen Bard at the University of Texas falls within that majority. “In general I think that it is true that our understanding of electrostatic phenomena and charging at solid/gas interfaces is incomplete,” he says. “I am, however, very sceptical about these phenomena being harnessed as a power source. The amounts of charge and power involved are very small.”

References: Galembeck presents his work at a national meeting of the American Chemical Society in Boston this week; it was previously published in Langmuir, DOI: 10.1021/la102494k. Pollack’s work was published in Langmuir, DOI: 10.1021/la802430k; the resulting debate played out in subsequent comments and replies in the same journal.

Electricity collected from the air could become the newest alternative energy source
Aug. 25, 2010

Imagine devices that capture electricity from the air ― much like solar cells capture sunlight ― and use it to light a house or recharge an electric car. Imagine using similar panels on the rooftops of buildings to prevent lightning before it forms. Strange as it may sound, scientists already are in the early stages of developing such devices, according to a report presented here today at the 240th National Meeting of the American Chemical Society (ACS). “Our research could pave the way for turning electricity from the atmosphere into an alternative energy source for the future,” said study leader Fernando Galembeck, Ph.D. His research may help explain a 200-year-old scientific riddle about how electricity is produced and discharged in the atmosphere. “Just as solar energy could free some households from paying electric bills, this promising new energy source could have a similar effect,” he maintained. “If we know how electricity builds up and spreads in the atmosphere, we can also prevent death and damage caused by lightning strikes,” Galembeck said, noting that lightning causes thousands of deaths and injuries worldwide and millions of dollars in property damage.

The notion of harnessing the power of electricity formed naturally has tantalized scientists for centuries. They noticed that sparks of static electricity formed as steam escaped from boilers. Workers who touched the steam even got painful electrical shocks. Famed inventor Nikola Tesla, for example, was among those who dreamed of capturing and using electricity from the air. It’s the electricity formed, for instance, when water vapor collects on microscopic particles of dust and other material in the air. But until now, scientists lacked adequate knowledge about the processes involved in formation and release of electricity from water in the atmosphere, Galembeck said. He is with the University of Campinas in Campinas, SP, Brazil.

Scientists once believed that water droplets in the atmosphere were electrically neutral, and remained so even after coming into contact with the electrical charges on dust particles and droplets of other liquids. But new evidence suggested that water in the atmosphere really does pick up an electrical charge. Galembeck and colleagues confirmed that idea, using laboratory experiments that simulated water’s contact with dust particles in the air. They used tiny particles of silica and aluminum phosphate, both common airborne substances, showing that silica became more negatively charged in the presence of high humidity and aluminum phosphate became more positively charged. High humidity means high levels of water vapor in the air ― the vapor that condenses and becomes visible as “fog” on windows of air-conditioned cars and buildings on steamy summer days. “This was clear evidence that water in the atmosphere can accumulate electrical charges and transfer them to other materials it comes into contact with,” Galembeck explained. “We are calling this ‘hygroelectricity’, meaning ‘humidity electricity’.”

In the future, he added, it may be possible to develop collectors, similar to the solar cells that collect sunlight to produce electricity, to capture hygroelectricity and route it to homes and businesses. Just as solar cells work best in sunny areas of the world, hygroelectrical panels would work more efficiently in areas with high humidity, such as the northeastern and southeastern United States and the humid tropics. Galembeck said that a similar approach might help prevent lightning from forming and striking. He envisioned placing hygroelectrical panels on top of buildings in regions that experience frequent thunderstorms. The panels would drain electricity out of the air, preventing the buildup of the electrical charge that is released as lightning. His research group already is testing metals to identify those with the greatest potential for use in capturing atmospheric electricity and preventing lightning strikes. “These are fascinating ideas that new studies by ourselves and by other scientific teams suggest are now possible,” Galembeck said. “We certainly have a long way to go. But the benefits in the long range of harnessing hygroelectricity could be substantial.”

Fernando Galembeck
email : fernagal [at] igm.unicamp [dot] br

Gerald Pollack
email : ghp [at] u.washington [dot] edu


Dr. Gerald Pollack’s views on water have been called revolutionary. He attests that, despite what Mr. Wizard may have taught you, there are actually four phases of water: solid, liquid, vapor and gel. This fourth phase, Pollack says, may in fact be the most important of all. “If you want to understand what happens in any system – be it biological, or physical, or chemical, or oceanographic, or atmospheric, or whatever – it doesn’t matter, anything involving water, you really have to know the behavior of this special kind of gel-like water, which dominates.”

Pollack’s water studies have led to amazing possibilities: that water acts as a battery, that this battery may recharge in a way resembling photosynthesis, that these water batteries could be harnessed to produce electricity. He discusses these ideas in a lecture now playing on UWTV: “Water, Energy and Life: Fresh Views From the Water’s Edge.” Yet the search for these fresh views has not been without struggle. “Before I became controversial, I almost never had a problem; I had large amounts of funding,” Pollack, a UW professor of bioengineering, explained. “The more controversial I became, the more difficult it’s been to get money. There were several really dry years.

“And now it’s gotten better because I think people are beginning to recognize the importance of the work on water. So it’s improving, but it’s still not easy.”

The study of water has a long history of unpopularity, Pollack said. “Six or seven decades ago, water was a really interesting subject. A lot of people thought that water had a particular chemistry – that it interacted with other molecules and was really an important feature of any system that contained water. Then, research almost stopped 40 years ago. There were two scientific debacles that took place that made everybody highly skeptical of any kind of research on water.” The first of these concerned polywater. “Some findings seemed to imply that water acted as though it was a polymer; in other words, all the molecules would somehow join together into a polymer and create some really weird kinds of effects,” Pollack described. Eventually, these results – first presented by a Russian chemist – were discredited. “The nails were driven into the coffin of water research by another debacle that took place 20 years later, and that was the idea of water memory,” Pollack said. “The idea was that water molecules could have memory of other substances into which it had been in contact.”

A debate in the science journal Nature eventually moved public opinion against this theory as well. “So because of these two incidents, scientists absolutely stayed away from water because water research was treacherous,” Pollack said. “You could drown in your own water.” Yet, these murky waters were not enough to deter Pollack from the subject. He first broached the topic in his 2001 book “Cells, Gels and the Engines of Life.” “The book asserts, contrary to the textbook view, that water is the most important and central protagonist in all of life,” Pollack said. “There are so many realms of science where water is central. In order to understand how everything works, you need to know the properties of water.” As Pollack sought to understand water, his focus turned to a particular phase near hydrophilic surfaces that didn’t quite fit in. “The three phases of water that everybody knows about in the textbook just don’t do it. In fact, it’s a 100-year-old idea that there’s a fourth phase of water. This is not an original idea.” Though the concept of a liquid crystalline, or gel-like, phase of water has been around for some time, the generally accepted view is that this kind of water is only two or three molecular layers thick. “And what we found in our experiments is that it’s not two or three layers, but two or three million layers. In other words, it’s the dominant feature,” Pollack said.

With this revelation in hand, Pollack focused his attention on this mostly unstudied phase of water. He has since discovered much about its underestimated thickness, its capacity to create a charge, its connections to photosynthesis and its practical applications. The thickness of this gel-like water may explain why items of higher density than water – such as a coin – can float. Surface tension is at work, but it arises from this thick, gel-like surface layer. “Turns out that the thickness depends on the pH,” Pollack said. “If you increase the pH, we found that this region gets thicker. It also gets thicker with time. So if you wait long enough, and if you have the right conditions, and maybe enough light beating down on it, you could conceivably get a very thick layer.

“If we come up with the right conditions, maybe it’s true that we can walk on water – if this region can be made thick enough.”

Biblical aspirations aside, the energy carried within this water and the water near it may be even more impressive. Dr. Pollack works in his lab to demonstrate some of the unusual properties of water. “This kind of water is negative, and the water beyond is positive. Negative, positive – you have a battery,” Pollack explained. “The question is, how is it used and might we capitalize on this kind of battery?” The key to understanding how this water battery works is learning how it is recharged. “You can’t just get something for nothing – there has to be energy that charges it,” Pollack said. “This puzzled us for several years, and finally we found the answer: it’s light. It was a real surprise. So if you take one of these surfaces next to water, and you see the battery right next to it, and you shine light on it, the battery gets stronger. It’s a very powerful effect.” This effect takes on entirely new possibilities when considered in terms of the water within our bodies. “I’m suggesting that you – inside your body – actually have these little batteries, and, remember, the batteries are fueled by light,” Pollack said. “Why don’t we photosynthesize? And the answer is, probably we do. It may not be the main mechanism for getting energy, but it certainly could be one of them. In some ways, we may be more like plants and bacteria than we really think.”

All of these innovative ideas may have practical applications as well. Water in its gel-like phase excludes solutes. “It’s actually pretty pure,” Pollack explained. “If you could collect this water right near the surface, it should be free of bacteria, for example, and maybe also viruses. So we’ve constructed a prototype device in the laboratory that shows excellent separation, on the order of 200 to 1. And we’re now trying to scale this up to practical quantities of water that could be filtered.” A second possibility is extracting electrical energy from this natural water battery. “We’ve so far been able to get only small amounts of electrical energy out, but we just started the project,” Pollack said. “If this process that we found is the same as photosynthesis, or the same principle, and I do think it may be, then it’s a pretty efficient system.”

Pollack and other researchers clearly have a long and complex challenge ahead as they seek to understand water in new ways. But you don’t have to know Pollack well to see that the challenge itself is part of the intrigue of pursuing such work. “I’m so compelled to continue our studies because they reveal so much and they answer so many questions – even already – questions that have remained unanswered for so long.” For Pollack, finding answers is a way of life. “I dream this stuff,” he confessed. “It never leaves me. If I’m sitting on the plane, sitting on the toilet seat, standing in the shower, it’s on my mind always.

“When I see something in nature that doesn’t seem right, or doesn’t seem explained yet, I just can’t stop thinking about it. Thinking about how it might work. I dwell on the problem. I never stop.”


Daniel Nocera
email : nocera [at] mit [dot] edu

by David L. Chandler / May 14, 2010

Expanding on work published two years ago, MIT’s Daniel Nocera and his associates have found yet another formulation, based on inexpensive and widely available materials, that can efficiently catalyze the splitting of water molecules using electricity. This could ultimately form the basis for new storage systems that would allow buildings to be completely independent and self-sustaining in terms of energy: The systems would use energy from intermittent sources like sunlight or wind to create hydrogen fuel, which could then be used in fuel cells or other devices to produce electricity or transportation fuels as needed.

Nocera, the Henry Dreyfus Professor of Energy and Professor of Chemistry, says that solar energy is the only feasible long-term way of meeting the world’s ever-increasing needs for energy, and that storage technology will be the key enabling factor to make sunlight practical as a dominant source of energy. He has focused his research on the development of less-expensive, more-durable materials to use as the electrodes in devices that use electricity to separate the hydrogen and oxygen atoms in water molecules. By doing so, he aims to imitate the process of photosynthesis, by which plants harvest sunlight and convert the energy into chemical form.

Nocera pictures small-scale systems in which rooftop solar panels would provide electricity to a home, and any excess would go to an electrolyzer — a device for splitting water molecules — to produce hydrogen, which would be stored in tanks. When more energy was needed, the hydrogen would be fed to a fuel cell, where it would combine with oxygen from the air to form water, and generate electricity at the same time. An electrolyzer uses two different electrodes, one of which releases the oxygen atoms and the other the hydrogen atoms. Although it is the hydrogen that would provide a storable source of energy, it is the oxygen side that is more difficult, so that’s where he and many other research groups have concentrated their efforts. In a paper in Science in 2008, Nocera reported the discovery of a durable and low-cost material for the oxygen-producing electrode based on the element cobalt.
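The storage loop Nocera describes (surplus electricity, electrolyzer, hydrogen tank, fuel cell) can be roughly sized with basic thermochemistry. The sketch below assumes an ideal, lossless electrolyzer and a hypothetical 10 kWh daily rooftop surplus; 286 kJ/mol is the standard enthalpy for splitting liquid water into hydrogen and oxygen:

```python
# Rough sizing of the hydrogen-storage loop described above.
# The surplus figure is a hypothetical assumption; conversion losses
# (real electrolyzers and fuel cells are well below 100%) are ignored.
KJ_PER_MOL_H2 = 286.0        # enthalpy to produce 1 mol H2 from liquid water
H2_MOLAR_MASS_G = 2.016

surplus_kwh = 10.0                      # assumed daily rooftop surplus
surplus_kj = surplus_kwh * 3600.0       # 1 kWh = 3600 kJ
moles_h2 = surplus_kj / KJ_PER_MOL_H2   # ideal electrolysis, no losses
mass_g = moles_h2 * H2_MOLAR_MASS_G
print(f"~{moles_h2:.0f} mol H2 (~{mass_g:.0f} g) stores {surplus_kwh:.0f} kWh")
```

A few hundred grams of hydrogen per day of surplus is what makes tank storage at household scale plausible, which is the design premise of the scheme.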

Now, in research being reported this week in the journal Proceedings of the National Academy of Sciences (PNAS), Nocera, along with postdoctoral researcher Mircea Dincă and graduate student Yogesh Surendranath, report the discovery of yet another material that can also efficiently and sustainably function as the oxygen-producing electrode. This time the material is nickel borate, made from materials that are even more abundant and inexpensive than the earlier find. Even more significantly, Nocera says, the new finding shows that the original compound was not a unique, anomalous material, and suggests that there may be a whole family of such compounds that researchers can study in search of one that has the best combination of characteristics to provide a widespread, long-term energy-storage technology. “Sometimes if you do one thing, and only do it once,” Nocera says, “you don’t know — is it extraordinary or unusual, or can it be commonplace?” In this case, the new material “keeps all the requirements of being cheap and easy to manufacture” that were found in the cobalt-based electrode, he says, but “with a different metal that’s even cheaper than cobalt.” The work was funded by the National Science Foundation and the Chesonis Family Foundation.

But the research is still in an early stage. “This is a door opener,” Nocera says. “Now, we know what works in terms of chemistry. One of the important next things will be to continue to tune the system, to make it go faster and better. This puts us on a fast technological path.” While the two compounds discovered so far work well, he says, he is convinced that as they carry out further research even better compounds will come to light. “I don’t think we’ve found the silver bullet yet,” he says. Already, as the research has continued, Nocera and his team have increased the rate of production from these catalysts a hundredfold from the level they initially reported two years ago.

John Turner, a research fellow at the National Renewable Energy Laboratory in Colorado, calls this a nice result, but says that commercial electrolyzers already exist that have better performance than these new laboratory versions. “The question then is under what circumstances would this system provide some advantage over the existing commercial systems,” he says. For large-scale deployment of solar fuel-producing systems, he says, “the big commercial electrolyzers use concentrated alkali for their electrolyte, which is OK in an industrial setting where engineers know how to handle the stuff safely; but when we are talking about thousands of square miles of solar water-splitting arrays, and individual homeowners, then an alternative electrolyte like this benign borate solution may be more viable.” The original discovery has already led to the creation of a company, called Sun Catalytix, that aims to commercialize the system in the next two years. And Nocera’s research program was recently awarded a major grant from the U.S. Department of Energy’s Advanced Research Projects Agency.



BY John Borland  /  December 12, 2007
This system began observing an enormous substorm, or northern lights event, on March 23, which helped trigger the discoveries. The storm moved faster than anyone had expected, crossing 15 degrees of longitude, about 400 miles, in a single minute. The entire two-hour event released about five hundred thousand billion joules, or about as much energy as a magnitude 5.5 earthquake, researchers said.

Over the next few months, the spacecraft encountered what researchers call magnetic ropes, essentially bundles of magnetic fields that are twisted together like twine. The first to be mapped by the THEMIS satellites was located about 40,000 miles above the Earth’s surface, in the magnetopause, and was about as wide as the Earth itself. The magnetopause is the region where the solar wind – electrically charged particles that flow away from the sun at incredible speeds – crashes into the Earth’s magnetic field. The “rope” formed there and unraveled again over the course of just a few minutes, but in the process proved to be a significant conduit for solar wind energy.

“The satellites have found evidence of magnetic ropes connecting Earth’s upper atmosphere directly to the sun,” said David Sibeck, project scientist for the mission at NASA’s Goddard Space Flight Center. “We believe that solar wind particles flow in along these ropes, providing energy for geomagnetic storms and auroras.”

The scientists have also observed the equivalent of a “bow shock,” as at the leading edge of a boat, where the front edge of Earth’s magnetic field first encounters the solar wind. Occasionally a burst of electrical current in the solar wind will hit this “bow shock,” creating an explosion, researchers said.
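The quoted sweep (15 degrees of longitude in one minute) can be converted to a distance and a speed with spherical geometry. The 65° N latitude below is an assumption typical of the auroral zone, not a figure from the article:

```python
import math

# Convert the quoted sweep (15 degrees of longitude in one minute) into
# distance and speed at an assumed auroral latitude of 65 degrees north.
EQUATOR_CIRCUMFERENCE_MI = 24901.0
latitude_deg = 65.0
miles_per_deg_lon = (EQUATOR_CIRCUMFERENCE_MI
                     * math.cos(math.radians(latitude_deg)) / 360.0)
distance_mi = miles_per_deg_lon * 15.0
speed_mph = distance_mi * 60.0          # that distance covered in one minute
print(f"~{distance_mi:.0f} miles swept in one minute (~{speed_mph:.0f} mph)")
```

So the sweep covers roughly 400 miles each minute, a speed of tens of thousands of miles per hour.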

Space scientists at UCLA solve the mystery behind aurora borealis
BY Stuart Wolpert  /  7/24/2008

UCLA space scientists and colleagues have identified the mechanism that triggers substorms in space; wreaks havoc on satellites, power grids and communications systems; and leads to the explosive release of energy that causes the spectacular brightening of the aurora borealis, also known as the northern lights. For 30 years, there have been two competing theories to explain the onset of these substorms, which are energy releases in the Earth’s magnetosphere, said Vassilis Angelopoulos, a UCLA professor of Earth and space sciences and principal investigator of the NASA-funded mission known as THEMIS (Time History of Events and Macroscale Interactions during Substorms).

One theory is that the trigger happens relatively close to Earth, about one-sixth of the distance to the moon. According to this theory, large currents building up in the space environment, which is composed of charged ions and electrons, or “plasma,” are suddenly released by an explosive instability. The plasma implodes toward Earth as the space currents are disrupted, which is the start of the substorm. A second theory says the trigger is farther out, about one-third of the distance to the moon, and involves a different process: When two magnetic field lines come close together due to the storage of energy from the sun, a critical limit is reached and the magnetic field lines reconnect, causing magnetic energy to be transformed into kinetic energy and heat. Energy is released, and the plasma is accelerated, producing accelerated electrons.

Which theory is right?
“Our data show clearly and for the first time that magnetic reconnection is the trigger,” said Angelopoulos, who reports the research in the July 24 online issue of the journal Science. “Reconnection results in a slingshot acceleration of waves and plasma along magnetic field lines, lighting up the aurora underneath even before the near-Earth space has had a chance to respond. We are providing the evidence that this is happening.” Previous studies of the Earth’s magnetosphere and space weather have been unable to pinpoint the origin of substorms, which are large magnetic disturbances. Ionized gas emitted from the sun’s surface speeds up as it moves away from the sun, attaining speeds of 1 million mph and interacting with the Earth’s upper atmosphere, which is also ionized, Angelopoulos said. Substorms are building blocks of larger storms. “We need to understand this environment and eventually be able to predict when these large energy releases will happen so astronauts can go inside their spacecraft and we can turn off critical systems on satellites so they will not be damaged,” Angelopoulos said. “This has been exceedingly difficult in the past, because previous missions, which measured the plasma at one location, were unable to determine the origin of the large space storms. To resolve this question properly requires correlations and signal-timing at multiple locations. This is precisely what was missing until now.”

At high northern latitudes in the northern U.S. and Canada, shimmering bands of light called the aurora borealis, or northern lights, stretch across the sky from the east to the west. During the geomagnetically disturbed periods known as substorms, these bands of light brighten. These multicolored light shows are generated when showers of high-speed electrons descend along magnetic field lines to strike the Earth’s upper atmosphere. Scientists want to learn when, where and why solar wind energy stored within the Earth’s magnetosphere is explosively released to accelerate these electrons. THEMIS is establishing for the first time when and where substorms begin, determining how the individual components of substorms interact, and discovering how substorms power the aurora borealis. “We discovered what sparks the magnificent light show of the aurora,” Angelopoulos said.

THEMIS has five satellites — with electric, magnetic, ion and electron detectors — in carefully chosen orbits around the Earth and an array of 20 ground observatories with automated, all-sky cameras located in the northern U.S. and Canada that catch substorms as they happen. The ground observatories take images of the aurora in white light. One satellite is a third of the distance to the moon, one is about a fourth of the distance and three are about a sixth of the distance. The outermost satellite takes four days to orbit the Earth, the next one two days, and the closest ones orbit the Earth in just one day. Every four days, the satellites line up. As the satellites are measuring the magnetic and electric fields of the plasma above the Earth’s atmosphere once every four days, the ground-based observatories are imaging the auroral lights and the electrical currents from space that generate them. THEMIS was launched on Feb. 17, 2007, from Cape Canaveral, Fla., and is expected to observe approximately 30 substorms in its nominal lifetime. “Armed with this knowledge, we are not only putting to rest age-old questions about the origin of the spectacular auroral eruptions but will also be able to provide statistics on substorm evolution and model its effects on space weather,” Angelopoulos said.
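The four-day alignment cadence follows directly from the 4:2:1-day orbital periods: the constellation repeats its relative geometry after the least common multiple of the periods. A minimal sketch, assuming idealized integer periods and ignoring orbital precession:

```python
from math import lcm  # multi-argument lcm needs Python 3.9+

# Idealized orbital periods of the five THEMIS probes, in days: one outer
# probe at ~4 days, one at ~2 days, and three inner probes at ~1 day.
periods = [4, 2, 1, 1, 1]

# The probes return to the same relative configuration after the least
# common multiple of their periods.
alignment_interval = lcm(*periods)
print(alignment_interval)  # 4 -> the satellites line up every four days
```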

The project received a NASA outstanding performance group award this May. THEMIS is managed by the Explorers Program Office at Goddard Space Flight Center in Maryland. THEMIS mission co-investigators include Christopher T. Russell, UCLA professor of geophysics and space physics and a co-author on the Science paper; Margaret G. Kivelson, professor of space physics in the UCLA Department of Earth and Space Sciences; Krishan Khurana, a researcher in the UCLA Department of
Earth and Space Sciences; and scientists from UC Berkeley, where the mission was put together and half the instruments were built, as well as from Germany, Austria, France, Russia, Japan, Canada and the U.S. Themis was the blindfolded Greek goddess of order and justice. In 1619, Galileo Galilei coined the term “aurora borealis” after Aurora, the Roman goddess of morning, under the misconception that the auroras he saw were due to sunlight reflecting from the atmosphere.


Vassilis Angelopoulos
email : vassilis [at] igpp.ucla [dot] edu / vassilis [at] ucla [dot] edu

Published Online July 24, 2008  /  Science DOI: 10.1126/science.

Tail Reconnection Triggering Substorm Onset
Magnetospheric substorms explosively release solar wind energy previously stored in Earth’s magnetotail, encompassing the entire magnetosphere and producing spectacular auroral displays. It has been unclear whether a substorm is triggered by a disruption of the electrical current flowing across the near-Earth magnetotail, at ~10 RE (RE = Earth Radius, or 6374 km), or by the process of magnetic reconnection typically seen farther out in the magnetotail, at ~20 to 30 RE. We report on simultaneous measurements in the magnetotail at multiple distances, at the time of substorm onset. Reconnection was observed at 20 RE, at least 1.5 min before auroral intensification, at least 2 min before near-Earth current disruption, and about 3 min before substorm expansion. These results demonstrate that substorms are likely initiated by tail reconnection.

Vassilis Angelopoulos 1*, James P. McFadden 2, Davin Larson 2, Charles W. Carlson 2, Stephen B. Mende 2, Harald Frey 2, Tai Phan 2, David G. Sibeck 3, Karl-Heinz Glassmeier 4, Uli Auster 4, Eric Donovan 5, Ian R. Mann 6, I. Jonathan Rae 6, Christopher T. Russell 1, Andrei Runov 1, Xu-Zhi Zhou 1, Larry Kepko 7

1 IGPP/ESS, UCLA, Los Angeles, CA, USA.
2 Space Sciences Laboratory, University of California at Berkeley, CA, USA.
3 Code 674, NASA/GSFC, Greenbelt, MD, USA.
4 TUBS, Braunschweig, D-38106, Germany.
5 Department of Physics and Astronomy, University of Calgary, Calgary, Canada.
6 Department of Physics, University of Alberta, Edmonton, Alberta, Canada.
7 Space Science Center, University of New Hampshire, Durham, NH, USA.



“On January 4, 2008, a reversed-polarity sunspot appeared—and this signals the start of Solar Cycle 24,” says David Hathaway of the Marshall Space Flight Center. Solar activity waxes and wanes in 11-
year cycles. Lately, we’ve been experiencing the low ebb: “very few flares, sunspots, or activity of any kind,” says Hathaway. “Solar minimum is upon us.” The previous solar cycle, Solar Cycle 23, peaked in 2000-2002 with many furious solar storms. That cycle decayed as usual to the present quiet, leaving solar physicists little to do but wonder: when would the next cycle begin? The answer is now. “New solar cycles always begin with a high-latitude, reversed polarity sunspot,” explains Hathaway. “Reversed polarity” means a sunspot with magnetic polarity opposite to that of sunspots from the previous solar cycle. “High-latitude” refers to the sun’s grid of latitude and longitude. Old-cycle spots congregate near the sun’s equator. New-cycle spots appear higher, around 25 or 30 degrees latitude.

The sunspot that appeared on January 4th fits both these criteria. It was high latitude (30 degrees N) and magnetically reversed. NOAA named the spot AR10981, or “sunspot 981” for short. Sunspot 981 was small, only about as wide as Earth (which counts as small on the grand scale of the sun), and it has already faded away. But its three-day appearance on Jan. 4-6 was enough to convince most solar physicists that Solar Cycle 24 is underway. Doug Biesecker of NOAA’s Space Weather Prediction Center in Boulder, Colorado, likens sunspot 981 “to the first robin of spring. There’s still snow on the ground, but the seasons are changing.” Last year, Biesecker chaired the Solar Cycle 24 Prediction Panel, an international group of experts from many universities and government agencies. “We predicted that Solar Cycle 24 would begin around March 2008 and it looks like we weren’t far off,” he says.

The onset of a new solar cycle is significant because of our increasingly space-based technological society. “Solar storms can disable satellites that we depend on for weather forecasts and GPS
navigation,” says Hathaway. Radio bursts from solar flares can directly interfere with cell phone reception while coronal mass ejections (CMEs) hitting Earth can cause electrical power outages. “The most famous example is the Quebec outage of 1989, which left some Canadians without power for as much as six days.” Air travel can be affected, too. Every year, intercontinental flights
carry thousands of passengers over Earth’s poles. It’s the shortest distance between, say, New York and Tokyo or Beijing and Chicago. In 1999, United Airlines made just twelve trips over the Arctic. By 2005, the number of flights had ballooned to 1,402. Other airlines report similar growth. “Solar storms have a big effect on polar regions of our planet,” says Steve Hill of the Space Weather Prediction Center. “When airplanes fly over the poles during solar storms, they can experience radio blackouts, navigation errors and computer reboots all caused by space radiation.” Avoiding the poles during solar storms solves the problem, but it costs extra time, money and fuel to “take the long way around.”

Now for the good news: More solar storms also means more auroras—”the greatest show on Earth.” During the last solar maximum, Northern Lights were spotted as far south as Arizona, Florida and California. Not so long ago, only visitors to the Arctic regularly enjoyed auroras, but with increasing attention to space weather and constantly improving forecasts, millions of people at all latitudes will know when to go out and look. Much of this is still years away. “Intense solar activity won’t begin immediately,” notes Hathaway. “Solar cycles usually take a few years to build from solar minimum (where we are now) to Solar Max, expected in 2011 or 2012.”


Virgin Galactic to Offer Space Cruise through Aurora Borealis  /
January 09, 2008

Imagine what it would be like to fly into the heart of the Northern Lights. You may not have to imagine forever. Richard Branson has been busy thinking up new ways to get people excited about private space tourism, and he’s come up with something pretty spectacular: he’s offering to fly the affluent into the world’s biggest lightshow, the Aurora Borealis. The New Mexico Virgin Galactic Spaceport isn’t scheduled for completion until 2010, but Branson is already planning his next project from an Arctic launchpad in the far north of Sweden, in the small town of Kiruna. The Arctic location provides the town with unrivalled views of the spectacular northern lights.

The aurora borealis is named after the Roman goddess of the dawn, Aurora, and the Greek name for the north wind, Boreas. It often appears as a greenish glow with hints of red and purple. The green and red emissions come from atomic oxygen; molecular nitrogen and nitrogen ions produce some of the low-level red and very high blue/violet aurorae. The lights occur most often from September to October and from March to April. Auroras are produced by the collision of charged particles from the magnetosphere with atoms and molecules of the Earth’s upper atmosphere. The particles originate from the sun and arrive in the vicinity of Earth in the relatively low-energy solar wind; magnetic reconnection then accelerates them towards Earth. Kiruna already has an existing base called Esrange. Launching humans into an active aurora is more for excitement than science, but it has been deemed safe. Dr Olle Norberg, Esrange’s director, said they’ve done the research: “Is there a build-up of charge on the spacecraft? What is the radiation dose that you would receive? Those studies came out saying it is safe to do this.” Safe, and undoubtedly an incredible view.




“This Quicktime Movie (3.9MB) allows you to hear the plasma waves observed by the Galileo Plasma Wave Receiver as it flew past Ganymede. The image is a dynamic spectrogram showing the intensity of waves as a function of frequency (vertical axis) and time (horizontal axis) in which red indicates high intensity waves and blue indicates low intensities. This spectrogram was obtained by Fourier transforming the actual waveform from the electric antenna at a sample rate of 201,600 samples per second. We have used the same waveform to generate an audio signal but have used a sample rate of about a factor of 9 slower in order to shift the 80-kHz bandwidth down into the audio frequency range. We have also used a technique called time-slicing to reduce the 45-minute recording to just one minute. The cursor moves across the spectrogram as the audio signal is played.”
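The audification trick described in the quote, shifting an 80-kHz-bandwidth signal into the audio range by playing it back about nine times slower, can be sketched as follows. The waveform here is a synthetic stand-in for the Galileo data, and the output filename is arbitrary:

```python
import math
import struct
import wave

FS_INSTRUMENT = 201_600   # original sample rate of the electric antenna (Hz)
SLOWDOWN = 9              # playback slowdown used to shift ~80 kHz into audio

# Synthetic stand-in for the recorded waveform: a 40 kHz tone, far above
# human hearing at the original rate.
n = FS_INSTRUMENT  # one second of data
samples = [math.sin(2 * math.pi * 40_000 * i / FS_INSTRUMENT) for i in range(n)]

# Writing the same samples with a 9x slower frame rate plays every
# frequency 9x lower: the 40 kHz tone becomes an audible ~4.4 kHz tone.
with wave.open("plasma_audio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                          # 16-bit PCM
    w.setframerate(FS_INSTRUMENT // SLOWDOWN)  # 22,400 Hz playback
    frames = b"".join(struct.pack("<h", int(32767 * s)) for s in samples)
    w.writeframes(frames)
```

The same samples now take nine seconds to play instead of one, which is also why the original team used time-slicing to compress a 45-minute recording into a minute.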

Earth’s poles long overdue for reversal
BY Claire Thomas  /  5 May 2008

A reversal of the Earth’s magnetic poles could happen sooner than we think, according to Dutch scientists who report that the planet’s magnetic field is becoming gradually less stable. A reversal could affect everything from navigation and communications equipment to the composition of the atmosphere, say experts. The report, published today in the U.K. journal Nature Geoscience, found that reversals have been far more common in the last 200 million years than they were deep in the planet’s history. Researchers, led by Andrew Biggin of the University of Utrecht in the Netherlands, made the discovery by analysing rocks formed between 2.45 and 2.82 billion years ago. The story of the Earth’s magnetic field is written in rocks over time. Because these rocks become ‘magnetised’ at
the time of their formation, scientists can discover which direction the poles were facing and how strong the Earth’s magnetic field was at that time. The magnetic poles wander around the vicinity of the geographic poles all the time – the north magnetic pole currently resides in the Canadian Arctic. However, at relatively regular intervals throughout the 4.5 billion year history of the planet, the magnetic poles have flipped completely. A few thousand years before a reversal, the magnetic field gradually gets weaker; something which could cause problems for inhabitants of the planet. “The Earth’s magnetic field is important for shielding the atmosphere, and us, from damage caused by the solar wind,” explained Biggin. “It’s also used by us and other species for navigation”. An increase in solar wind would disrupt communications equipment and power grids.

Current records suggest that we are long overdue for our next reversal, he said. “On average, there is a reversal around every 400,000 years, but this varies a lot.” The geological record suggests that the last reversal was around 800,000 years ago. Furthermore, there is already evidence to show that the field has been weakening over the last few centuries – some archaeological remains suggest that the field was far stronger in the time of the Roman Empire, some 2,000 years ago. Don’t throw away your compass just yet though – major changes may not even happen in our lifetimes. “The reversal process is very unpredictable,” said Biggin. “We could be heading into a reversal in the next few centuries, or we might be waiting another million years”. Even then, reversal is a slow process, which can take some thousands of years to complete. But what about the effect on living organisms? Another paper, published in Nature in March suggested that some species that rely on the field for navigation or orientation have taken a knock from pole reversals in the past. Author David Gubbins, of the University of Leeds in England, said that some single-celled organisms that relied on magnetism to tell up from down likely went extinct during past reversals. Human beings have survived reversals in the past, however, added Gubbins, “so we are likely to come through the next one unscathed.”

Andrew Biggin
email : biggin [at] geo [dot] uu [dot] nl

David Gubbins
d.gubbins [at] [dot] uk









Weather observed on a star for the first time
by Jeff Hecht  /  25 June 2007

Weather – caused by the same forces as the weather on Earth – has been seen on a star for the first time, reveal observations of mercury clouds on a star called Alpha Andromedae. Previously, astronomers had thought that any structures on stars were caused by magnetic fields. Sunspots, for example, are relatively cool regions on the Sun where strong magnetic fields prevent energy from flowing outwards. But now, seven years of painstaking observations of Alpha Andromedae show that stars do not need magnetic fields to form clouds after all. Lying about 100 light years away, Alpha Andromedae is one of a class of stars unusually rich in mercury and manganese. Earlier observations of similar stars had revealed uneven distributions of mercury, but all of those stars had strong magnetic fields. These relatively massive stars do not mix the gases in their atmospheres, as less massive stars like the Sun do. So the balance between the pull of gravity and the push of radiation pressure concentrates some heavy elements at certain atmospheric levels. Magnetic fields were then thought to continue the separation process, sequestering some chemicals in particular regions. But researchers led by Oleg Kochukhov of Uppsala University in Sweden have found that this last step is not necessary to create chemical clouds on a star.

Driven by the tides
They observed the mercury concentration in Alpha Andromedae – which does not have a detectable magnetic field – for seven years with 1.2- and 6-metre telescopes, detecting the mercury by its signature absorption line in the violet end of the spectrum. They resolved details on the spinning star’s surface by looking at how rapidly the clouds were turning towards or away from Earth. That revealed that the mercury concentration varies by as much as a factor of 10,000 across its surface, and the pattern of concentration changes as well. The evidence for changes in the mercury distribution over time “look very convincing”, comments Gregg Wade of the Royal Military College of Canada in Kingston, who discovered in 2006 that the star lacked a magnetic field. But exactly what causes the clouds to change over time is unclear. Kochukhov and colleagues say the changes “may have the same underlying physics as the weather patterns on terrestrial and giant planets”. The mercury clouds are on the brighter and larger member of a close pair of stars that orbit each other every 97 days. “The second star may create tides on the surface of the main star, much like the Moon creates tides on Earth, which drives evolution of the mercury cloud cover,” Kochukhov told New Scientist. But he adds that other explanations are possible. So for now, the weather on stars, as on Earth, remains hard to fathom.

{Journal reference: Nature Physics (doi: 10.1038/nphys648)}



Should every computer chip have a cosmic ray detector?  /  March 07, 2008

How can distant supernovae, black holes and other cosmic events cause a desktop computer to crash? The answer is that they produce cosmic rays, which produce high energy particles in the atmosphere that can occasionally hit RAM chips. The moving particles trail electrons, which can infiltrate chips’ circuits and cause errors. That’s why computer chip giant Intel was in December awarded a US patent for the idea of building cosmic ray detectors into every chip. When cosmic rays hit the Earth’s atmosphere, they collide with air molecules, producing an “air shower” of high energy protons and neutrons and other particles. It is these that Intel wants to look for. If they get near the wrong part of a chip, the electrons they trail can create a digital 1 or 0 out of nowhere, something called a “soft error”.

Computer giant IBM thoroughly investigated the problem in the mid 90s, testing nearly 1,000 memory devices at sea level, in mountains and in caves. They showed that at higher altitude, more soft errors occurred, while in the caves there were nearly none. That proved cosmic rays were to blame. As RAM chips became more dense, the problem was predicted to get worse. But better designs and error checking techniques have helped, with systems used in planes and spacecraft getting beefed-up error checking because they are at greater risk.

But Intel thinks we may still be living on borrowed time: “Cosmic ray induced computer crashes have occurred and are expected to increase with frequency as devices (for example, transistors) decrease in size in chips. This problem is projected to become a major limiter of computer reliability in the next decade.” Their patent suggests built-in cosmic ray detectors may be the best option. The detector would either spot cosmic ray hits on nearby circuits, or directly on the detector itself. When triggered, it could activate error-checking circuits that refresh the nearby memory, repeat the most recent actions, or ask for the last message from outside circuits to be sent again.

But if cosmic ray detectors make it into desktops, would we get to know when they find something? It would be fun to see a message suddenly pop up informing you that a cosmic ray had been detected. I haven’t seen any recent figures on how often they happen, but back in 1996 IBM estimated you would see one a month for every 256MB of RAM. Perhaps it could even be useful to astronomers, if everyone shared that data, like the idea of using hard-drive wobbles to monitor earthquakes.
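IBM's 1996 figure turns into a simple back-of-the-envelope estimate; the function below is purely illustrative:

```python
# IBM's 1996 rule of thumb quoted above: roughly one cosmic-ray soft
# error per month for every 256 MB of RAM.
def expected_soft_errors_per_month(ram_mb: float) -> float:
    return ram_mb / 256.0

print(expected_soft_errors_per_month(256))   # 1.0 -> one bit flip a month
print(expected_soft_errors_per_month(4096))  # 16.0 -> a fully loaded 4 GB server
```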


“Once the electronics industry had determined how to control package contaminants, it became clear that other causes were also at work. James F. Ziegler led a program of work at IBM which culminated in the publication of a number of papers (Ziegler and Lanford, 1979) demonstrating that cosmic rays also could cause soft errors. Indeed, in modern devices, cosmic rays are the predominant cause. Many different particles can be present in cosmic rays, but the main cause of soft errors seems to be neutrons. Neutrons are uncharged and cannot disturb electron distribution on their own, but can undergo neutron capture by the nucleus of an atom in a chip, producing an unstable isotope which then causes a soft error when it decays producing an alpha particle.

Cosmic ray flux depends on altitude. Burying a system in a cave reduces the rate of cosmic-ray-induced soft errors to a negligible level. In the lower levels of the atmosphere, the flux increases by a factor of about 2.2 for every 1000 m (1.3 for every 1000 ft) increase in altitude above sea level. Computers operated on top of mountains, or in aircraft, experience an order of magnitude higher rate of soft errors compared to sea level. This is in contrast to package decay induced soft errors, which do not change with location. It happens that one isotope of boron, Boron-10, captures neutrons and undergoes alpha decay very efficiently. It has a very high neutron collision cross section. Boron is used in BPSG, a glass used to cover silicon dies to protect them. In critical designs, depleted boron– consisting almost entirely of Boron-11–is used, to avoid this effect and therefore to reduce the soft error rate. Boron-11 is a by-product of the nuclear industry.”
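The altitude scaling quoted above is easy to check numerically: applying the factor-of-2.2-per-1000-m rule reproduces the "order of magnitude higher rate" reported for mountaintop systems. A sketch using only the numbers in the passage:

```python
def relative_neutron_flux(altitude_m: float) -> float:
    """Cosmic-ray neutron flux relative to sea level, using the rough
    rule quoted above: a factor of ~2.2 per 1000 m of altitude."""
    return 2.2 ** (altitude_m / 1000.0)

print(round(relative_neutron_flux(0), 1))     # 1.0 at sea level
print(round(relative_neutron_flux(3000), 1))  # 10.6 -> the order-of-magnitude
                                              # increase seen on mountaintops
```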

FULL PATENT: U.S. Patent 7,309,866

A cosmic ray detector includes a cantilever with a first tip. The detector also includes a second tip and circuitry to provide a signal indicative of a distance between the first and second tips being such as would be caused by a cosmic ray interaction event.

Inventors: Hannah; Eric C. (Pebble Beach, CA)
Assignee: Intel Corporation (Santa Clara, CA)
Appl. No.: 10/882,917 / Filed: June 30, 2004

“The normal background radiation environment on the surface of the earth has ionizing components that sometimes affect the reliability of semiconductor integrated circuit chips, such as memory chips used in computers. If an intruding particle is near a p-n junction in the chip, it may induce a soft error, or single-event upset, which can cause signals to change voltage and, accordingly, bits of data to change value. Excess electron-hole pairs may be generated in the wake of the penetrating particle. The field in the neighborhood of the p-n junction, if sufficiently strong, separates these electrons and holes before they recombine, and sweeps the excess carriers of the appropriate sign to a nearby device contact. A random signal may be registered if this collected charge exceeds a critical threshold value.

Cosmic particles in the form of neutrons or protons can collide randomly with silicon nuclei in the chip and fragment some of them, producing alpha-particles and other secondary particles, including the recoiling nucleus. These can travel in all directions with energies which can be quite high (though of course less than the incoming nucleon energy). Alpha-particle tracks so produced can sometimes extend a hundred microns through the silicon. The track of an ionizing particle may extend a fraction of a micron to many microns through the chip volume of interest, generating in its wake electron-hole pairs at a rate of one pair per 3.6 eV (electron volts) of energy lost. A typical track might represent a million electron-hole pairs.

Cosmic ray induced computer crashes have occurred and are expected to increase with frequency as devices (for example, transistors) decrease in size in chips. This problem is projected to become a major limiter of computer reliability in the next decade. Various approaches have been suggested to eliminate or reduce the number of soft errors due to cosmic ray interactions in chips. None of these approaches is completely successful, particularly as device size continues to decrease. Another approach is to accept that some soft errors will happen and to design memory and logic circuitry to include redundancy in all calculations. This approach involves more gates and enough spatial separation between contributing redundant elements to avoid mutual soft errors from the same cosmic ray. This approach is not practical for many chips.”

When ‘soft errors’ hit the desktop
BY Brian Prangle  /  Jul 28, 1998

Your system crashes unexpectedly. You are the victim of an event, ages ago in a distant galaxy, which hurled a shower of protons and neutrons randomly into space, only for them to collide with electrons in your system’s RAM. Or, more insidiously, the radiation does not crash your system but subtly alters data without your knowing. Blow away a few electrons and a bit gets altered from a zero to a one, with unforeseen consequences. If this bit merely represents the colour of the pixel at co-ordinate 720,346 in your latest shoot ’em up game, then so what? If it represents the first digit on a six-figure cheque, then you may want to correct it. This may not be as far-fetched as it sounds. These so-called ‘soft’ errors have been observed in integrated circuits for several decades. The new breed of PCs and servers with massive amounts of DRAM increases the possibility of such errors. IBM has long been vocal about soft errors: with a large population of mainframes processing huge quantities of critical data in massive memories, they have built ECC (Error Correcting Code) into memory to safeguard data. ECC memory detects bit errors and corrects them.

Mainframes cannot tolerate bit errors. The larger, more expensive workgroup Intel servers increasingly also come configured with ECC memory and motherboard chipsets. Soft errors at the desktop have barely entered the PC industry’s consciousness; DRAM manufacturers have concentrated on removing ‘hard’ errors produced by contaminants at the production stage. Some two years ago, IBM released the results of a survey they had conducted on the effect of cosmic rays on DRAM. Some 800 devices were tested in constant-read mode at sea level, in mountainous regions, and in caves. More soft errors were found at high altitudes, presumably because there are fewer air molecules at higher altitudes to absorb cosmic rays. But after three months, the underground DRAM tested at zero soft errors. IBM estimate that for every 256MB of memory you’ll get one soft error a month.

With Intel’s new chip sets allowing up to 4GB of DRAM per system, DRAM densities at the workgroup server level are approaching mainframe levels of yesteryear. IBM reckon that 1GB is the threshold beyond which robust, mainframe-class error correction is necessary, and Toshiba reckon that the failure rate is directly proportional to the amount of DRAM in the system. As memory designs are set to change dramatically over the next year to increase memory density and throughput, the chances of soft error occurrence can only rise. But convincing a sceptical industry that sub-atomic particles from outer space are a design problem is IBM’s biggest problem, closely followed by the ruthless price competition that makes purchasers and manufacturers reluctant to spend more on a soft error problem.

A more tangible threat from outer space is facing global communication carriers that rely on satellites. In November the earth is set to pass through a particularly dense area of small meteorites known as the Leonids, which are the debris from the tail of a comet. Satellites and space debris are incompatible, with partial or catastrophic failure resulting from collision with even the smallest object. The regular visits of the space shuttle to repair satellites are witness to this phenomenon. We are becoming increasingly dependent on computer-driven communication, yet paradoxically each new performance boost makes us more vulnerable.

The world’s first freely available P2P warning system

[ the solution ]
The Tsunami-Harddisk-Detector utilizes your existing computer hardware to detect earthquakes, which can lead to tsunamis. It is a pure software solution, therefore it can be distributed free of charge. The computers (nodes) participating in the project connect to a P2P (Peer-to-Peer) network, thereby establishing a distributed computing platform with high reliability. A few of the participating computers act as supernodes, thereby performing data analysis on their attached nodes. In case of emergency, the supernodes inform their attached nodes instantly. Hence, if you decide to participate and install the client software, you will be automatically warned about potential tsunamis.

[ background ]
A tsunami can be generated by any disturbance that rapidly displaces a large mass of water, such as an earthquake, volcanic eruption, landslide or meteorite impact. However, the most common cause is an undersea earthquake. An earthquake which is too small to create a tsunami by itself may trigger an undersea landslide quite capable of generating a tsunami. Waves are formed as the displaced water mass moves under the influence of gravity to regain its equilibrium and radiates across the ocean like ripples on a pond.

If the initial event is sensed, a coupled system of partial differential equations can be used to simulate the propagation of tsunami waves and issue a tsunami warning, if necessary. The equations are known as the shallow water wave equations (Pelinovsky et al. 2001, Layton 2002). In them, u and v are the horizontal velocity components of the water surface, x and y are the spatial coordinates of the wave, t is elapsed time, g is the acceleration due to gravity, and h is the height of the wave above the ocean floor topography b. The critical problem is how to sense the initial earthquake; how the Tsunami-Harddisk-Detector copes with this is explained below.
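The equations themselves appear to have been lost in transcription. Under the variable names given in the text, one standard form of the shallow water system is the following (this is the textbook formulation, not necessarily the exact one the project page displayed):

```latex
% Shallow water wave equations in the variables named above:
% u, v: horizontal velocities; h: wave height above the bottom topography b;
% g: gravitational acceleration; x, y: space; t: time.
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y} + g\frac{\partial (h+b)}{\partial x} = 0
\qquad
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y} + g\frac{\partial (h+b)}{\partial y} = 0
\qquad
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0
```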

[ how it works ]
Tsunamis are generated by earthquakes, volcanic eruptions or large meteorite impacts. These initial events cause seismic waves which can be sensed by the fragile components of computer harddisks. Seismic waves travel at about 5,000 m/s (18,000 km/h), while tsunamis travel much more slowly, at about 200 m/s (720 km/h, depending on the local depth) through water. This gives time for a tsunami forecast to be made and warnings to be issued to threatened areas, if warranted.

Software client
The Tsunami Harddisk Detector is a small software client (see Fig. 1) installed on your computer which continuously monitors the vibration of the internal components of your harddisk (see Fig. 2). Since they are extremely fragile, they react to any accelerations of the computer, including those that originate from earthquakes. Different technical strategies are currently under investigation to analyze seismic activity. For best performance, the computer with its harddisk should be fastened to the ground.

One computer is not enough to identify the epicenter of an earthquake. However, a small number of networked computers can locate the epicenter, measure the intensity and estimate the risk of a tsunami. To this end, the computers are connected via a P2P network. The network consists of many nodes (which perform sensing) and a few supernodes (which perform signal processing). In particular, the supernodes perform two tasks:
*     locate the epicenter based on the time lag and intensity of the event
*     remove ‘signal noise’. Noise is generated by events that shake the harddisk but cannot cause a tsunami (e.g. a user kicks his computer). The supernode can detect this noise because it is improbable that many users kick their computers simultaneously.
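A toy sketch of the first task: given the arrival times reported by several nodes, a supernode could grid-search for the source location that best explains the relative time lags. The node positions, wave speed and search method here are hypothetical; the software's actual algorithm is not described in detail.

```python
import itertools

SEISMIC_SPEED = 5.0  # km/s (assumed)

def locate(nodes, arrival_times, grid=range(101)):
    """Find the grid point whose predicted time lags best match the data."""
    best, best_err = None, float("inf")
    for x, y in itertools.product(grid, grid):
        # Predicted travel time from the candidate source to each node.
        pred = [((x - nx) ** 2 + (y - ny) ** 2) ** 0.5 / SEISMIC_SPEED
                for nx, ny in nodes]
        # Compare time *differences* so the unknown origin time cancels out.
        err = sum(((p - pred[0]) - (o - arrival_times[0])) ** 2
                  for p, o in zip(pred, arrival_times))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Synthetic check: a source at (30, 70) km, four nodes at the grid corners.
nodes = [(0, 0), (100, 0), (0, 100), (100, 100)]
times = [((30 - nx) ** 2 + (70 - ny) ** 2) ** 0.5 / SEISMIC_SPEED
         for nx, ny in nodes]
print(locate(nodes, times))  # recovers (30, 70)
```

Because only time differences are used, nodes need synchronized clocks but no knowledge of when the quake actually began.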

In order to be able to locate the epicenter, each node must know exactly where it stands on earth. Therefore, the longitude and latitude as well as the orientation must be entered when the software is started the first time. This data can be obtained from an attached GPS-mouse or from

Known problems
Although the described method works in principle, it has inherent problems: the internals of harddisks operate at very high speeds, with frequencies of about 1 kHz, while earthquakes exhibit much lower frequencies (see the seismogram to the right). Also, the accelerations during normal harddisk operation are about 30 g, while an earthquake has a much smaller horizontal acceleration (for example, the Kobe earthquake in 1995 had 0.84 g horizontally). Hence, the low-frequency waves resulting from earthquakes must be filtered out from the disk's much stronger, higher-frequency vibrations.
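A minimal sketch of that filtering problem: separate a weak, low-frequency "quake" signal from the strong ~1 kHz vibration of normal disk operation. A plain moving average serves as the low-pass filter here; a real detector would use a properly designed filter, and the sample rate and amplitudes are assumptions.

```python
import math

FS = 8000        # sample rate in Hz (assumed)
WINDOW = 400     # 50 ms window: averages out 1 kHz, passes a 2 Hz signal

def lowpass(samples, window=WINDOW):
    """Moving-average low-pass filter."""
    out, acc = [], 0.0
    for i, s in enumerate(samples):
        acc += s
        if i >= window:
            acc -= samples[i - window]
        out.append(acc / min(i + 1, window))
    return out

# Synthetic input: weak 2 Hz "quake" plus strong 1 kHz "disk" vibration.
t = [i / FS for i in range(FS)]
sig = [math.sin(2 * math.pi * 2 * ti) + 5 * math.sin(2 * math.pi * 1000 * ti)
       for ti in t]
filtered = lowpass(sig)

# After the filter's warm-up, roughly unit-amplitude quake signal remains.
peak = max(abs(f) for f in filtered[WINDOW:])
print(f"peak after filtering: {peak:.2f}")
```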

[ usage ]
The application is developed in Java, hence it runs on any computing platform that supports at least Java 1.6. Sensing of seismic activity is highly hardware-dependent and is currently supported only on Windows.

Hard drive wobbles track earthquake spread
BY Tom Simonite  /  08 September 2006

Software that turns ordinary computer hard drives into makeshift quake sensors, and connects machines to form a quake-monitoring network, has been released for free on the web. Although experts say the technique is unlikely to replace standard scientific equipment, computer hard drives can provide rough estimates of the intensity and location of an earthquake and warn of an impending tsunami. Hard drives contain tiny vibration sensors that warn when a device is being shaken so that delicate components can be automatically locked in one place. A damaged drive led US technology consultant Michael Stadler to realise that these sensors might also be able to detect the seismic vibration caused by an earthquake. “A friend had their hard drive fail due to the vibration from nearby construction work,” Stadler told New Scientist. So he decided to write a piece of software, called the Tsunami Hard Disk Detector, that monitors hard drive vibration and links computers together in order to spot and map earthquakes.

Travelling wave
“A hard drive is not sensitive enough to detect as accurately as a seismometer,” says Stadler. “But in a peer-to-peer network you can use geographic patterns to identify the direction a seismic wave is travelling, and perhaps the epicentre.” Stadler’s software was made freely available online and has been downloaded around 2500 times so far, mostly by users in Asia. Users can enter the latitude and longitude of their machine and are automatically connected together to form a decentralised network.

If an earthquake occurs nearby, the network maps the strength and timing of vibrations to reveal the epicentre of the quake. The results of the analysis are then instantly available to any computer running the software. And irrelevant vibrations are discounted by only looking for signals that are picked up by several different machines. Information collected by a couple of hundred users successfully identified the earthquake that struck the south coast of Java in July 2006 and triggered a tsunami (see Java tsunami death toll over 300). But Stadler admits the system is imperfect. “The official strength was 7.7 on the Richter scale, the Hard Disk Detector said 5.9,” says Stadler. “There’s more work to do but it shows the concept can work.” He notes that the system has failed to detect several other earthquakes.
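Magnitude scales are logarithmic, so the gap between 7.7 and 5.9 is larger than it looks. A quick check using the standard Gutenberg-Richter energy relation (illustrative arithmetic, not from the article):

```python
# Gutenberg-Richter energy relation: log10(E) = 1.5 * M + 4.8 (E in joules),
# so each whole magnitude step is about a 32-fold increase in energy.
def energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

ratio = energy_joules(7.7) / energy_joules(5.9)
print(f"energy ratio: {ratio:.0f}x")  # ~500x
```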

Global network
Suleyman Nalbant, a geophysicist at the University of Ulster in the UK, says seismometers dotted across the globe can easily pick up quakes like the one that hit Java. But if the hard drive system could detect smaller earthquakes “it could be very useful”, he says. “The global network is sometimes not perfect for smaller events.” In practice, however, problems with accuracy might prevent this, Nalbant says: “I’m not sure the sensors can really be that accurate”. Bruce Malamud, an expert on natural hazards at King’s College London, UK, is also sceptical. “It’s very novel,” he says. “But to make a real difference you’d probably need a very large number of computers using the software.” Stadler’s system was awarded a prize for innovative use of the internet at the Ars Electronica fair held in Linz, Austria, between 31 August and 5 September.


‘GPS shield’ will mean faster tsunami alerts  /  15 September 2007

A “GPS shield” that works in real time could save lives by quickly warning of potential tsunamis. The German-Indonesian Tsunami Early Warning System (GITEWS) is being developed by a team led by Jörn Lauterjung of the National Research Centre for Geosciences in Potsdam, Germany. Unlike a GPS method proposed last year, which detects seismic waves transmitted through the Earth’s crust to distant receivers, the new ground-based system takes real-time measurements of vertical ground motion – the type of fault movement more likely to produce tsunamis (Journal of Geophysical Research, DOI: 10.1029/2006JB004640). To protect the Indian Ocean region, the proposed shield would include an array of 18 GPS stations.

“The 2004 catastrophic Indian Ocean tsunami has strongly emphasized the need for reliable tsunami early warning systems. Another giant tsunamigenic earthquake may occur west of Sumatra, close to the large city of Padang. We demonstrate that the presence of islands between the trench and the Sumatran coast makes earthquake-induced tsunamis especially sensitive to slip distribution on the rupture plane as wave heights at Padang may differ by more than a factor of 5 for earthquakes having the same seismic moment (magnitude) and rupture zone geometry but different slip distribution.

Hence reliable prediction of tsunami wave heights for Padang cannot be provided using traditional, earthquake-magnitude-based methods. We show, however, that such a prediction can be issued within 10 minutes of an earthquake by incorporating special types of near-field GPS arrays (“GPS-Shield”). These arrays measure both vertical and horizontal displacements and can resolve higher order features of the slip distribution on the fault than the seismic moment if placed above the rupture zone or less than 100 km away from the rupture zone. Stations in the arrays are located as close as possible to the trench and are aligned perpendicular to the trench, i.e., parallel to the expected gradient of surface coseismic displacement. In the case of Sumatra and Java, the GPS-Shield arrays should be placed at the Mentawai Islands, located between the trench and Sumatra, and directly at the Sumatra and Java western coasts. We demonstrate that the “GPS-Shield” can also be applied to northern Chile, where giant earthquakes may also occur in the near future. Moreover, this concept may be applied globally to many other tsunamigenic active margins where the land is located above or close to seismogenic zones.”

GPS can help give early warning of tsunamis
BY Tom Simonite  /  30 June 2006

Using GPS (global positioning system) data to measure how points on land move following an undersea earthquake could help geologists decide if the tremor will cause an ocean-wide tsunami. Combined with existing tsunami warning systems, the data could speed up decisions about whether to issue an alert and avoid false alarms, say US researchers. Tsunami warning systems use a combination of seismometers to measure tremors and ocean buoys to spot pressure waves. But it is difficult to pinpoint the exact strength of an undersea quake. This can cause problems for those deciding whether to issue a warning, says Seth Stein, a geophysicist at Northwestern University in Illinois, US. “The hardest job is to distinguish quakes that are big from those that are dangerously big,” he told New Scientist. “Richter scale 8 is quite a big earthquake, but about 8.5 is the magic number. Above that, ocean-wide tsunamis start to happen.” GPS measurements of points around a quake could determine more quickly than current methods whether this threshold has been exceeded, he says.

Vague measurement
The 2004 Sumatran quake – which caused the Asian tsunami – was eventually measured at between 9.2 and 9.3, but seismometers can initially only determine whether a quake is larger than about 7. “It normally takes a couple of hours to know whether it was over 8.5 or not,” says Stein. “Limits on how much energy can be stored in rock mean the first body waves of a quake don’t get bigger, but just ring for longer.” Taking GPS measurements of points on land around a quake can answer this crucial question within 15 minutes, according to a study by Stein and co-workers from the University of Nevada, US. To prove the technique can work, they used GPS data recorded during the first 15 minutes of the 2004 Sumatran quake, which made it clear the tremor would go on to cause a devastating tsunami in the Indian Ocean.
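The threshold logic Stein describes is easy to sketch: convert a GPS-derived seismic moment to moment magnitude via the standard Hanks-Kanamori relation, then compare it with the ~8.5 ocean-wide-tsunami threshold. The moment value below is illustrative; published estimates for the 2004 quake vary.

```python
import math

def moment_magnitude(m0: float) -> float:
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def ocean_wide_tsunami_risk(mw: float, threshold: float = 8.5) -> bool:
    """The ~8.5 cutoff above which ocean-wide tsunamis start to happen."""
    return mw >= threshold

mw = moment_magnitude(8e22)  # roughly the 2004 Sumatran quake's moment
print(f"Mw = {mw:.1f}, ocean-wide risk: {ocean_wide_tsunami_risk(mw)}")
```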

Millimetre accuracy
Software developed at NASA’s Jet Propulsion Laboratory was used to measure the position of 38 GPS stations between 300 and 7500 kilometres from the quake’s epicentre northwest of Sumatra. Knowing how these stations moved to within 7 millimetres makes it possible to measure low-frequency waves from the quake and estimate its size. “Using that 15 minutes of data it was possible to say the quake was a 9.” This was very close to the 9.2 or 9.3 figure eventually determined for the quake, says Stein. “We think this would be a useful third component to the existing tsunami warning system. By taking out the guesswork it could make it more accurate and avoid false alarms.”

Geophysicist Paul Burton at the University of East Anglia, UK, agrees, but cautions that the system’s effectiveness will depend on the location of the earthquake. “It’s feasible but might not help in all circumstances.” If the GPS stations available are not located in the right place relative to the epicentre, satellite measurements may not be that helpful, he says. “Another consideration is whether high-tech systems for tsunami warnings will be here in three to four hundred years time,” says Burton. “Educating people right now and making sure the knowledge about what to do is kept alive can make a huge difference. For example, a 10-year-old British girl saved many lives during the Indian Ocean tsunami when she remembered the early signs of a tsunami from her geography lessons.”

{Geophysical Research Letters (DOI: 10.1029/2006GL026145)}

Kjell Henriksen Observatory  /  Svalbard 78°N
“KHO is now in full operation! Because of all the light sensitive instrumentation up there, we urge people not to visit us without an appointment. If you drive up by car – please use only your parking lights and shine as little light on the observatory as possible! Happy auroral season! :o)”

Solar flares cause GPS failures, possibly devastating for jets and distress calls
BY Thomas Oberst  /  Sept. 26, 2006

Strong solar flares cause Global Positioning System (GPS) receivers to fail, Cornell researchers have discovered. Because solar flares — larger-than-normal radiation “burps” by the sun — are generally unpredictable, such failures could be devastating for “safety-of-life” GPS operations — such as navigating passenger jets, stabilizing floating oil rigs and locating mobile phone distress calls. “If you’re driving to the beach using your car’s navigation system, you’ll be OK. If you’re on a commercial airplane in zero visibility weather, maybe not,” said Paul Kintner Jr., professor of electrical and computer engineering at Cornell and head of Cornell’s GPS Laboratory.

Alessandro Cerruti, a graduate student working for Kintner, accidentally discovered the effect on Sept. 7, 2005, while operating a GPS receiver at Arecibo Observatory in Puerto Rico, one of six Cornell Scintillation Monitor (SCINTMON) receivers. Cerruti was investigating irregularities in the plasma of the Earth’s ionosphere — a phenomenon unrelated to solar flares — when the flare occurred, causing the receiver’s signal to drop significantly. To be sure of the effect, Cerruti obtained data from other receivers operated by the Federal Aviation Administration (FAA) and the Brazilian Air Force. He found that all the receivers had suffered exactly the same degradation at the exact time of the flare regardless of the manufacturer. Furthermore, all receivers on the sunlit side of the Earth had been affected.

Cerruti will report on the findings Sept. 28 at the Institute of Navigation Meeting in Fort Worth, Texas, where he will receive the best student paper prize. The full results of the discovery will be published in a forthcoming issue of the journal Space Weather. The flare consisted of two events about 40 minutes apart: The first lasted 70 seconds and caused a 40 percent signal drop; the second lasted 15 minutes and caused a 50 percent drop. But this flare was moderate and short-lived; in 2011 and 2012, during the next solar maximum, flares are expected to be 10 times as intense and last much longer, causing signal drops of over 90 percent for several hours.
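Receiver engineers usually quote such losses in decibels; converting the percentage drops above (illustrative arithmetic, not from the article) shows why a 90 percent drop is so much worse than a 50 percent one:

```python
import math

def drop_to_db(fraction_lost: float) -> float:
    """Express a fractional power drop as a (negative) change in dB."""
    return 10 * math.log10(1 - fraction_lost)

# A 50% power drop is about -3 dB; a 90% drop is -10 dB.
for lost in (0.40, 0.50, 0.90):
    print(f"{lost:.0%} drop = {drop_to_db(lost):+.1f} dB")
```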

“Soon the FAA will require that every plane have a GPS receiver transmitting its position to air traffic controllers on the ground,” warned Cerruti. “But suppose one day you are on an aircraft and a solar radio burst occurs. There’s an outage, and the GPS receiver cannot produce a location. … It’s a nightmare situation. But now that we know the burst’s severity, we might be able to mitigate the problem.” The only solutions, suggested Kintner, are to equip receivers with weak signal-tracking algorithms or to increase the signal power from the satellites. Unfortunately, the former requires additional compromises to receiver design, and the latter requires a new satellite design that neither exists nor is planned. “I think the best remedy is to be aware of the problem and operate GPS systems with the knowledge that they may fail during a solar flare,” Kintner said.

The team was initially confused as to why the flare had caused the signal loss. Then Kintner recalled that solar flares are accompanied by solar radio bursts. Because the bursts occur over the same frequency bands at which GPS satellites transmit, receivers can become confused, leading to a loss of signal. Had the solar flare occurred during the night in Puerto Rico or had Cerruti been operating SCINTMON only at night, he would not have made the discovery. “We normally do observations only in the tropics and only at night because that’s where and when the most intense ionospheric irregularities occur,” said Kintner. However, since no one had done it before, Cerruti was looking at “mid-latitudes” (between the tropics and the poles), where weaker irregularities can occur both night and day. As a result, SCINTMON detected the solar flare.

Other authors of the forthcoming paper include D.E. Gary and L.J. Lanzerotti of the New Jersey Institute of Technology, E.R. de Paula of the Instituto Nacional de Pesquisas Espaciais and Cornell research associate Hien Vo.


“The neutral upper atmosphere, the Thermosphere, plays an important role in the characteristics of Earth’s ionosphere. Collisions between the high-density neutral atmosphere and ions at altitudes near 100 km above the Earth impede ion motion, creating electrical resistivity which impacts the overall electrical coupling between ionosphere and magnetosphere. This resistivity in the ionospheric E region is the electrical load for the disturbance currents generated in the interaction of the solar wind with our magnetosphere. Time histories of radar and optical observations are enabling a climatology of this region to be developed, and long-duration experiments are being mounted to address the wave modes which couple energy through the I-T interaction region.”

Space Physicists and Atmospheric Scientists Can Now Predict Disruptions Caused by the Sun’s Coronal Mass Ejections / March 1, 2006

Solar activity can wreak havoc in communications systems — particularly during coronal mass ejections, when plumes of electrically charged particles hit earth’s atmosphere. Scientists can now track the plumes down to the individual cities affected, helping to predict disruptions. John Foster, a space physicist at the Massachusetts Institute of Technology’s Haystack Observatory in Westford, Mass., says, “This material flies through interplanetary space and impacts the Earth like a solar hammer hitting the Earth’s magnetic field.” This solar hammer can cause communication disruptions on the ground and a plume of electrically charged particles high in the earth’s atmosphere.

Now, atmospheric scientists at MIT may have discovered a way to predict space weather disruptions by identifying these plumes over the United States. “What we are seeing is a pattern in where these plumes are forming,” says Anthea Coster, an atmospheric research scientist at MIT Haystack Observatory. Scientists hope to detect these patterns with the ISIS instrument. ISIS picks up radio signals and measures plume movement. Then, a supercomputer processes this data, which will alert scientists where the plumes occur, pinpointing down to the state — even city — that will be affected. Foster says, “Predicting these would be a great benefit to any systems users, people who really rely on communications or navigation systems. Military operations, for one, would very much like to know what the space weather conditions would be like tomorrow.” Scientists say in the near future ISIS instruments will be distributed throughout the United States.

BACKGROUND: Bursts of matter from the sun, called coronal mass ejections (CMEs), have long been known to affect cell phone reception, TV and radio signals, and how much radiation exposure we receive while flying in the upper atmosphere. Now, researchers have detected plumes that tell them where the radiation from the ejection is concentrated and what places will be influenced the most by the CME.

CME or SOLAR FLARE?: People sometimes confuse CMEs with solar flares, but they are different phenomena. Solar flares are explosions on the sun that occur when energy builds up around sunspots, becoming so hot — millions of degrees Fahrenheit — that they produce a burst of electromagnetic radiation across the entire electromagnetic spectrum, from radio waves to x-rays and gamma rays. CMEs were once thought to be the result of solar flares, but while they sometimes accompany solar flares, there is no direct relation between the two. They occur when a large bubble of plasma escapes through a star’s corona and travels through space to the earth at high speeds over the course of several hours. If a CME collides with the earth, it can produce a geomagnetic storm, which can cause electrical power outages and damage communications satellites and electronic equipment. Solar flares, on the other hand, affect radio communications.

PLASMAS: A plasma is essentially electrically charged (ionized) gas, consisting of free-moving electrons and ions (atoms that have lost electrons). Applying a surge of energy — with a laser, for example — knocks electrons off gas atoms, turning them into ions and creating a plasma. Unless this energy is sustained, however, plasmas will recombine back into a neutral gas. On earth, we are familiar with the ordinary states of matter: solids, liquids and gases. But in the universe at large, plasma is by far the most common form. Plasma in the stars and the space between them makes up 99 percent of the visible universe.

Solar bursts may threaten GPS / April 5, 2007

The Global Positioning System, relied on for everything from navigating cars and airplanes to transferring money between banks, may be threatened by powerful solar flares, a panel of scientists warned Wednesday. “Our increasingly technologically dependent society is becoming increasingly vulnerable to space weather,” David L. Johnson, director of the National Weather Service, said at a briefing. GPS receivers have become widely used in recent years, with satellite signals used to navigate airplanes, ships and automobiles, and in cell phones, mining, surveying and many other commercial applications. Indeed, banks use the system to synchronize money transfers, “so space weather can affect all of us, right down to our wallet,” said Anthea J. Coster, an atmospheric scientist at the Haystack Observatory of the Massachusetts Institute of Technology. The cause for their concern, Johnson said, was an unexpected solar radio burst on December 6 that affected virtually every GPS receiver on the lighted half of Earth. Some receivers had a reduction in accuracy while others completely lost the ability to determine position, he said.

Solar activity rises and falls in 11-year cycles, with the next peak expected in 2011. If that increasing level of activity produces more such radio bursts, the GPS system could be seriously affected, the researchers said. And protecting the system is no simple task, added Paul M. Kintner Jr., a professor of electrical engineering at Cornell University, who monitored the December event. There are two possible ways to shield the system, he said, both very expensive: either alter all GPS antennas to screen out solar signals or replace all of the GPS satellites with ones that broadcast a stronger signal. That’s why it’s essential to learn more about the sun’s behavior quickly in an effort to find ways to predict such events, the researchers said. In addition to the GPS system, the December solar flare affected satellites and induced unexpected currents in the electrical grid, Johnson said. “The effects were more profound than we expected and more widespread than we expected,” added Kintner.

Dale E. Gary, chairman of the physics department of the New Jersey Institute of Technology, said the burst produced 10 times more radio noise than any burst previously recorded. The difference between that burst and normal solar radio emissions “was like the difference between the noise level of a normal conversation and the noise level in the front row of a rock concert,” he said. “This is a wake-up call” to improve technology, commented Anthony J. Mannucci, group supervisor at NASA’s Jet Propulsion Laboratory. Patricia H. Doherty, co-director of the Institute for Scientific Research at Boston College, said the burst affected but did not shut down the Federal Aviation Administration’s Wide Area Augmentation System, which uses GPS signals to assist in navigation. Most of the WAAS ground stations were able to maintain contact with enough satellites to continue working, though their accuracy was somewhat affected, she said.

The stations have to maintain contact with at least four satellites to work, but usually monitor at least 10 to increase their accuracy, she said. Most were able to meet the minimum, she said. The briefing came at a Space Weather Enterprise Forum convened by the National Oceanic and Atmospheric Administration to discuss the effects of solar activity. Because of its increasing importance, Johnson said, the Weather Service’s Space Environment Center was converted from a mainly research center in 2005 to an operational center reporting on solar activity and its impacts.

April 13, 2007 News Release

“Intelsat General Corp announced Wednesday that it has been selected for an industry-government collaboration to demonstrate the viability of conducting military communications through an Internet router in space. The Department of Defense project to test Internet routing in space (IRIS) will be managed by Intelsat General, and the payload will convert to commercial use once testing has been completed. The IRIS project is one of seven projects funded and announced in fiscal 2007 as a Joint Capability Technology Demonstration (JCTD) by the Department of Defense. The IRIS JCTD is a three-year program that allows the DoD to collaborate with Intelsat General and its industry team to demonstrate and assess the utility of the IRIS capability. Cisco will provide commercial IP networking software for the on-board router. In addition, SEAKR Engineering will manufacture the space-hardened router and integrate it into the IRIS payload. “IRIS extends the Internet into space, integrating satellite systems and the ground infrastructure for warfighters, first responders and others who need seamless and instant communications,” said Bill Shernit, President and CEO of Intelsat General. “IRIS will enable U.S. and allied military forces with diverse satellite equipment to seamlessly communicate over the Internet from the most remote regions of the world.”

The satellite scheduled to carry the IRIS payload, IS-14, is set for launch in the first quarter of 2009. It will be placed in geostationary orbit at 45 degrees West longitude with coverage of Europe, Africa and the Americas. Representing the next generation of space-based communications, IRIS will serve as a computer processor in the sky, merging communications being received on various frequency bands and transmitting them to multiple users based on data instructions embedded in the uplink. The IRIS payload will support network services for voice, video and data communications, enabling military units or allied forces to communicate with one another using Internet protocol and existing ground equipment. The IRIS payload will interconnect one C-band and two Ku-band coverage areas. The IRIS architecture and design allow for flexible IP packet (layer 3) routing or multicast distribution that can be reconfigured on demand. With the on-board processor routing the up and down communications links, the IRIS payload is expected to enhance satellite performance and reduce signal degradation from atmospheric conditions. “The IRIS architecture allows direct IP routing over satellite, eliminating the need for routing via a ground-based teleport, thereby dramatically increasing the efficiency and flexibility of the satellite communications link,” said Don Brown, Vice President of Hosted Payload Programs for Intelsat General. “IRIS is to the future of satellite-based communications what ARPANET was to the creation of the Internet in the 1960s.” The Defense Information Systems Agency (DISA) will have overall responsibility for coordinating use of the IRIS technology among the government user community and for developing means of leveraging the IRIS capability once the satellite is in space.”


US military security still poor after ‘biggest’ hack
BY John Leyden  / 13th April 2007

Accused Pentagon hacker Gary McKinnon is continuing to fight against extradition to the US after losing an appeal last week. Only the Law Lords now stand between the Scot and a US trial for allegedly breaking into and damaging 97 US government computers between 2001 and 2002 and causing $700,000 worth of damage, in what US authorities have described as the “biggest military” computer hack ever. He allegedly infiltrated networks run by the US Army, US Navy, US Air Force, Department of Defense and NASA. US authorities described McKinnon as an uber-hacker who posed a threat to national security in the aftermath of the 9/11 attack. McKinnon (AKA Solo) admits he infiltrated computer systems without permission. The 41-year-old former sysadmin said he gained access to military networks – using a Perl script to search for default passwords – but describes himself as a bumbling amateur motivated by curiosity about evidence of UFOs. He said numerous other hackers had access to the resources he was using and questions why the US authorities have singled him out for extradition. Any damage he did was purely accidental, McKinnon claims. If convicted, following extradition and a US trial, McKinnon faces a jail term of up to 45 years’ imprisonment.

According to a reformed computer hacker accused of similar crimes 10 years ago, McKinnon is being made a scapegoat for the shortcomings of US military security. Mathew Bevan, whose hacker handle is Kuji, was accused of breaking into US military computer systems but his 1997 case at Woolwich Crown Court was dropped after a legal battle lasting around 18 months. No attempt was made to extradite Bevan. After the case, Bevan became an ethical hacker and security consultant, first with Tiger Computer Security, and later on a freelance basis with his firm the Kuji Media Corporation. “Both Gary and I were accused of similar offences. The difference is his alleged crimes were committed in a different political climate, post 9-11. The decision to push extradition in Gary’s case is political,” Bevan told El Reg.

Bevan, like McKinnon, has an interest in free energy and evidence of UFOs. The similarities in the case go further. The crimes Bevan is alleged to have committed were cited as evidence of cyberterrorism in US senate hearings in 1996. “They haven’t found a cyberterrorist or ‘bad boy’ for a while and it looks like they are trying to make an example in Gary’s case,” he said. McKinnon should have been allowed to plead guilty in his own country and not be faced with the prospect of a long prison term in a US prison with “inhumane” conditions, Bevan argues. He says the military systems McKinnon is accused of hacking remain vulnerable to attack. “I’m sure there are a lot of people on these machines, some of whom the US authorities allow to get in. The prosecution against Gary is about saving face for security lapses by the US military that remain as bad as they were 10 years ago,” Bevan said. “If this had happened with a corporation someone would have been sacked.” He added that US authorities are keen to talk up the cyberterrorism threat in order to protect information security budgets.

McKinnon, unlike a US citizen who faced similar charges, is in a particularly bad situation. “The authorities are trying to rip him away from his family and ruin his life. Gary committed his alleged offences in the UK, and according to the Computer Misuse Act, jurisdiction lies here. “Gary has suffered trial by media over the last five years, with everything weighed against him. In the UK the prosecution has to establish a trail of evidence. Unlike the US, hearsay evidence isn’t allowed in Britain,” Bevan said. Despite everything that’s happened to McKinnon, he reckons the case will fail to act as much of a deterrent to other would-be hackers. “Has it scared anyone? I shouldn’t think so,” Bevan said.

Final appeal
Lawyers for McKinnon are petitioning for leave to appeal to the House of Lords on grounds including the use of “deliberately coercive plea bargaining” tactics by US authorities during the course of the long running case. His lawyers argued that he had been subjected to “improper threats” that he would receive a much harsher sentence and be denied the opportunity to serve out the back-end of his jail term in the UK unless he played ball. Appeal court judges Lord Justice Maurice Kay and Mr Justice Goldring criticised US prosecution tactics but said these didn’t offer enough grounds for appeal against the Home Secretary’s decision to confirm a 2006 ruling that McKinnon ought to be extradited to the US.

The unemployed sysadmin has had these charges over his head since March 2002 when he was arrested by officers from the UK’s National High Tech Crime Unit. The case against him lay dormant until July 2005 when extradition proceedings commenced. McKinnon has suffered ill health over recent months as a result of the stress caused by the case, according to his lawyers. McKinnon’s supporters argue the case has wider political implications. “It is not just about Gary McKinnon, there are lots of other people, from computer hackers to legitimate businessmen, who will continue to fall foul of this sort of surrender of British sovereignty and obeisance before the extra-territorial demands of the US legal bureaucracy,” Mark, a member of London 2600 who runs the Free Gary blog, told us. “However the same lack of a requirement to show prima facie evidence also applies to European Union countries under the European Arrest Warrant,” he adds.

McKinnon’s lawyers chose not to argue about whether he might be put on trial before a military tribunal, but that may well be argued in the House of Lords if leave to appeal (which is by no means guaranteed) is granted. “Basically the judges have said ‘we have to trust the USA Government to act in good faith’, until they show that they have broken their promises – which will by then, of course, be too late for Gary McKinnon. Unlike Babar Ahmad or even any of the British citizens who were held without trial at Guantanamo Bay, Gary is actually accused of directly ‘attacking the US military’ systems,” Mark notes. “Even if Gary faces a civilian court in the USA, his chances of being found not guilty or of getting a lenient sentence appear to be slim, given the prosecution’s recommendations as to length of sentence.” But the whole effort to try McKinnon in the US might backfire on the US military by putting its security shortcomings under the spotlight. “If there is an actual trial in the USA, rather than a coerced or otherwise ‘plea bargain’, there are a large number of senior US military officers and civilian IT managers and auditors who are going to have to explain the incompetence or possible corruption or perhaps treason, which went on for years and months under their command, both before and after September 11,” Mark claims. “Even if this is suppressed in court, it might lead to Congressional Committee hearings,” he adds.

The biggest solar proton storm in 15 years erupted last week. NASA researchers discuss what it might have done to someone on the Moon.
Sickening Solar Flares / January 27, 2005

NASA is returning to the Moon–not just robots, but people. In the decades ahead we can expect to see habitats, greenhouses and power stations up there. Astronauts will be out among the moondust and craters, exploring, prospecting, building. Last week, though, there were no humans walking around on the Moon. Good thing. On January 20th, 2005, a giant sunspot named “NOAA 720” exploded. The blast sparked an X-class solar flare, the most powerful kind, and hurled a billion-ton cloud of electrified gas (a “coronal mass ejection”) into space. Solar protons accelerated to nearly light speed by the explosion reached the Earth-Moon system minutes after the flare–the beginning of a days-long “proton storm.” Here on Earth, no one suffered. Our planet’s thick atmosphere and magnetic field protect us from protons and other forms of solar radiation. In fact, the storm did some good. When the plodding coronal mass ejection arrived 36 hours later and hit Earth’s magnetic field, sky watchers in Europe saw the brightest and prettiest auroras in years.

The Moon is a different story. “The Moon is totally exposed to solar flares,” explains solar physicist David Hathaway of the Marshall Space Flight Center. “It has no atmosphere or magnetic field to deflect radiation.” Protons rushing at the Moon simply hit the ground–or whoever might be walking around outside. The Jan. 20th proton storm was by some measures the biggest since 1989. It was particularly rich in high-speed protons packing more than 100 million electron volts (100 MeV) of energy. Such protons can burrow through 11 centimeters of water. A thin-skinned spacesuit would have offered little resistance. “An astronaut caught outside when the storm hit would’ve gotten sick,” says Francis Cucinotta, NASA’s radiation health officer at the Johnson Space Center. At first, he’d feel fine, but a few days later symptoms of radiation sickness would appear: vomiting, fatigue, low blood counts. These symptoms might persist for days.

Astronauts on the International Space Station (ISS), by the way, were safe. The ISS is heavily shielded, plus the station orbits Earth inside our planet’s protective magnetic field. “The crew probably absorbed no more than 1 rem,” says Cucinotta. One rem, short for Roentgen Equivalent Man, is the radiation dose that causes the same injury to human tissue as 1 roentgen of x-rays. A typical diagnostic CAT scan, the kind you might get to check for tumors, delivers about 1 rem [ref]. So for the crew of the ISS, the Jan. 20th proton storm was no worse than a trip to the doctor on Earth. On the Moon, Cucinotta estimates, an astronaut protected by no more than a space suit would have absorbed about 50 rem of ionizing radiation. That’s enough to cause radiation sickness. “But it would not have been fatal,” he adds. To die, you’d need to absorb, suddenly, 300 rem or more. The key word is suddenly. You can get 300 rem spread out over a number of days or weeks with little effect. Spreading the dose gives the body time to repair and replace its own damaged cells. But if that 300 rem comes all at once … “we estimate that 50% of people exposed would die within 60 days without medical care,” says Cucinotta. Such doses from a solar flare are possible. To wit: the legendary solar storm of August 1972.
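The dose figures above can be lined up in a quick back-of-envelope script. All rem values are the article's own estimates (quoted from Cucinotta), not independently derived:

```python
# Radiation doses from the Jan. 20, 2005 proton storm, as quoted in the
# article (rem = Roentgen Equivalent Man). A diagnostic CAT scan is taken
# as ~1 rem, per the article.
DOSES_REM = {
    "ISS crew (shielded, inside magnetosphere)": 1,   # ~ one CAT scan
    "Moonwalker in a spacesuit": 50,                  # radiation sickness
    "Acute lethal threshold (sudden dose)": 300,      # ~50% fatal in 60 days
}

for scenario, rem in DOSES_REM.items():
    print(f"{scenario}: {rem} rem (~{rem:.0f} CAT-scan equivalents)")

# The key variable is dose *rate*: 300 rem spread over days or weeks is
# survivable, while 300 rem all at once is ~50% fatal without medical care.
assert DOSES_REM["Moonwalker in a spacesuit"] < 300  # sickening, not lethal
```

This makes the article's point concrete: the hypothetical moonwalker's 50 rem sits far above the ISS crew's dose but well below the acute lethal threshold.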

It’s legendary (at NASA) because it happened during the Apollo program when astronauts were going back and forth to the Moon regularly. At the time, the crew of Apollo 16 had just returned to Earth in April while the crew of Apollo 17 was preparing for a moon-landing in December. Luckily, everyone was safely on Earth when the sun went haywire. “A large sunspot appeared on August 2, 1972, and for the next 10 days it erupted again and again,” recalls Hathaway. The spate of explosions caused, “a proton storm much worse than the one we’ve just experienced,” adds Cucinotta. Researchers have been studying it ever since. Cucinotta estimates that a moonwalker caught in the August 1972 storm might have absorbed 400 rem. Deadly? “Not necessarily,” he says. A quick trip back to Earth for medical care could have saved the hypothetical astronaut’s life.

Surely, though, no astronaut is going to walk around on the Moon when there’s a giant sunspot threatening to explode. “They’re going to stay inside their spaceship (or habitat),” says Cucinotta. An Apollo command module with its aluminum hull would have attenuated the 1972 storm from 400 rem to less than 35 rem at the astronaut’s blood-forming organs. That’s the difference between needing a bone marrow transplant … or just a headache pill. Modern spaceships are even safer. “We measure the shielding of our ships in units of areal density–or grams per centimeter-squared,” says Cucinotta. Big numbers, which represent thick hulls, are better: The hull of an Apollo command module rated 7 to 8 g/cm2. A modern space shuttle has 10 to 11 g/cm2. The hull of the ISS, in its most heavily shielded areas, has 15 g/cm2. Future moonbases will have storm shelters made of polyethylene and aluminum possibly exceeding 20 g/cm2. A typical space suit, meanwhile, has only 0.25 g/cm2, offering little protection. “That’s why you want to be indoors when the proton storm hits,” says Cucinotta. But the Moon beckons and when explorers get there they’re not going to want to stay indoors. A simple precaution: Like explorers on Earth, they can check the weather forecast–the space weather forecast. Are there any big ‘spots on the sun? What’s the chance of a proton storm? Is a coronal mass ejection coming? All clear? It’s time to step out.
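The areal-density figures and the 1972-storm doses quoted above can be tabulated to show just how steep the difference is between a suit and a hull. The numbers below are the article's, with ranges collapsed to midpoints:

```python
# Areal densities (g/cm^2) quoted in the article for various shielding.
# Ranges ("7 to 8", "10 to 11") are taken at their midpoints here.
SHIELDING_G_PER_CM2 = {
    "spacesuit": 0.25,
    "Apollo command module hull": 7.5,      # article: 7 to 8
    "space shuttle hull": 10.5,             # article: 10 to 11
    "ISS, most shielded areas": 15.0,
    "future moonbase storm shelter": 20.0,  # article: "possibly exceeding 20"
}

# 1972-storm doses from the article: ~400 rem in a suit, <35 rem inside
# an Apollo command module.
suit_rem, apollo_rem = 400, 35
print(f"Apollo hull cut the 1972 dose by better than {suit_rem / apollo_rem:.0f}x")

for shield, density in sorted(SHIELDING_G_PER_CM2.items(), key=lambda kv: kv[1]):
    print(f"{density:5.2f} g/cm^2  {shield}")
```

Sorted this way, the 0.25 g/cm2 suit is roughly thirty times thinner than even the Apollo-era hull, which is the whole argument for storm shelters.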

From the archive, originally posted by: [ spectre ]

“Weather modification is also commonly known as cloud seeding, cloud
modification, atmospheric resource management, and precipitation
management. Weather Modification, Inc. specializes and excels in all
aspects of this water management technology.

Specifically, we offer a complete range of services from turn-key
operational programs for rainfall increase (rain enhancement), snow
pack augmentation, hail damage mitigation (hail suppression), and fog
clearing (fog dissipation), to technical assistance and/or technology
transfer for all of these.

In addition, we can provide complete weather radar services, including
interfaces with TITAN full-sky radar data archival software, a
complete line of proven and FAA-approved seeding equipment, seeding
aircraft, atmospheric research instrumentation, and aircraft
modification for these purposes.

We have been conducting weather operations and research since 1961,
and constantly strive to improve all aspects of these atmospheric
water management tools. We invite visitors to our Fargo, North Dakota
facilities. Just e-mail us at, and we will be
pleased to answer any questions you may have.”


U.S. EFFORTS

US answer to global warming: smoke and giant space mirrors
Washington urges scientists to develop ways to reflect sunlight

David Adam, environment correspondent
Saturday January 27, 2007 / The Guardian

The US government wants the world’s scientists to develop technology
to block sunlight as a last-ditch way to halt global warming, the
Guardian has learned. It says research into techniques such as giant
mirrors in space or reflective dust pumped into the atmosphere would
be “important insurance” against rising emissions, and has lobbied for
such a strategy to be recommended by a major UN report on climate
change, the first part of which will be published on Friday.

The US has also attempted to steer the UN report, prepared by the
Intergovernmental Panel on Climate Change (IPCC), away from
conclusions that would support a new worldwide climate treaty based on
binding targets to reduce emissions – as sought by Tony Blair. It has
demanded a draft of the report be changed to emphasise the benefits of
voluntary agreements and to include criticisms of the Kyoto Protocol,
the existing treaty which the US administration opposes.

The final IPCC report, written by experts from across the world, will
underpin international negotiations to devise a new emissions treaty
to succeed Kyoto, the first phase of which expires in 2012. World
governments were given a draft of the report last year and invited to comment.

The US response, a copy of which has been obtained by the Guardian,
says the idea of interfering with sunlight should be included in the
summary for policymakers, the prominent chapter at the front of each
IPCC report. It says: “Modifying solar radiance may be an important
strategy if mitigation of emissions fails. Doing the R&D to estimate
the consequences of applying such a strategy is important insurance
that should be taken out. This is a very important possibility that
should be considered.”

Scientists have previously estimated that reflecting less than 1% of
sunlight back into space could compensate for the warming generated by
all greenhouse gases emitted since the industrial revolution. Possible
techniques include putting a giant screen into orbit, thousands of
tiny, shiny balloons, or microscopic sulphate droplets pumped into the
high atmosphere to mimic the cooling effects of a volcanic eruption.
The IPCC draft said such ideas were “speculative, uncosted and with
potential unknown side-effects”.
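The "less than 1%" estimate can be sanity-checked with standard round numbers. None of these constants come from the article: the solar constant (~1361 W/m^2), planetary albedo (~0.30), and net anthropogenic forcing since the industrial revolution (~2.3 W/m^2, roughly the IPCC's AR4-era figure) are assumed values for a sketch:

```python
# Back-of-envelope check of the "reflect less than 1% of sunlight" claim.
# Constants are standard round numbers, not taken from the article.
SOLAR_CONSTANT = 1361.0        # W/m^2 at top of atmosphere
ALBEDO = 0.30                  # fraction of sunlight already reflected
ANTHROPOGENIC_FORCING = 2.3    # W/m^2 since the industrial revolution (approx.)

# Sphere-averaged absorbed sunlight: divide by 4 (disc vs. sphere geometry),
# then keep the fraction not reflected by clouds, ice, etc.
absorbed = SOLAR_CONSTANT / 4 * (1 - ALBEDO)   # ~238 W/m^2
fraction_needed = ANTHROPOGENIC_FORCING / absorbed

print(f"Mean absorbed sunlight: {absorbed:.0f} W/m^2")
print(f"Fraction to reflect:    {fraction_needed:.1%}")  # comes out near 1%
```

Under these assumptions the offsetting fraction lands just under 1%, consistent with the estimate the scientists quoted.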

The US submission is based on the views of dozens of government
officials and is accompanied by a letter signed by Harlan Watson,
senior climate negotiator at the US state department. It complains the
IPCC draft report is “Kyoto-centric” and it wants to include the work
of economists who have reported “the degree to which the Kyoto
framework is found wanting”. It takes issue with a statement that “one
weakness of the [Kyoto] protocol, however, is its non-ratification by
some significant greenhouse gas emitters” and asks: “Is this the only
weakness worth mentioning? Are there others?”

It also insists the wording on the ineffectiveness of voluntary
agreements be altered to include “a number of them have had
significant impacts” and complains that overall “the report tends to
overstate or focus on the negative effects of climate change.” It also
wants more emphasis on responsibilities of the developing world.

The IPCC report is made up of three sections. The first, on the
science of climate change, will be launched on Friday. Sections on the
impact and mitigation of climate change – in which the US wants to
include references to the sun-blocking technology – will follow later
this year.

The likely contents of the report have been an open secret since the
Bush administration posted its draft copy on the internet in April.
Next week’s science report will say there is a 90% chance that human
activity is warming the planet, and that global average temperatures
will rise another 1.5C to 5.8C this century depending on emissions.
The US response shows it accepts these statements, but it disagrees
with a more tentative conclusion that rising temperatures have made
hurricanes more powerful.


Friday, September 23, 2005

Idaho weatherman quits, says he wants to pursue hurricane theory

IDAHO FALLS, Idaho – A Pocatello weatherman who gained attention for
an unusual theory that Hurricane Katrina was caused by the Japanese
mafia using a Russian electromagnetic generator has quit the
television station.

Scott Stevens’ last appearance on KPVI-TV was Thursday.

His departure comes after station officials learned a link labeled
“Make a Donation” on Stevens’ Web site,, where he
expounds on his theory, opened a payment form connected to Stevens’
KPVI e-mail address.

Still, station manager Bill Fouch, who’d told Stevens he should keep
his views separate from his TV role, insisted his former employee
wasn’t forced out.

“Scott advised me several months ago that he wouldn’t renew his
contract so he could devote full time to this,” Fouch said. “He wants
to get right at it.”

Stevens believes a little-known oversight in physical laws makes it
possible to create and control storms using a Cold War-era weapon
allegedly made by the Russians in 1976. The nine-year KPVI weatherman
said he’s received 120,000 hits on his Web site in two days, now gets
about 100 e-mails a day and has 15 radio bookings in the next five

“I needed more time to do everything that’s been put in front of me,”
said Stevens, 39. “I have not been able to dedicate the 40 hours a
week to this place.”

Earlier this week, scientists told the Idaho Falls Post Register the
theory was bogus.

“It’s laughable to think it (Hurricane Katrina) could have been
manmade,” said Rob Young, a hurricane expert at Western Carolina
University in Cullowhee, N.C.

From the archive, originally posted by: [ spectre ]



A village in the shadow of the Italian Alps that sees no sun for three
months of the year is installing a giant mirror on a mountainside to
reflect sunshine into the town square.

A tailor-made sheet of steel will be positioned this week on a nearby
peak at a height of 1,100 metres to direct sunlight down to the tiny
hamlet of Viganella, in the narrow Antrona valley, north of Turin.

Viganella, with a population of only 197, suffers from a complete lack
of direct sun from Nov. 11 to Feb. 2.

Mayor Pierfranco Midali presented the project for approval in January
of this year to help lift his community out of darkness during those 83
days a year.

Computer controlled monitors will enable the eight-by-six-metre sheet
of metal to follow the path of the sun and reflect its rays down to a
250-square-metre area in the village square for at least six hours a day.

The project cost of $151,000 has been covered by the local governments
and a private bank.

From the archive, originally posted by: [ spectre ]


“The closest I’ve been to a tornado is another hard one to judge. I’d
say 1/8 mile once for sure and less than 1 mile several times. The
Hanover KS tornado from April 6, 2006 was pretty much overhead with the
vortex at the surface probably 1/8 mile south. The May 10, 2003
Hannibal MO tornado and May 16, 2004 Chambers ones were both pretty
close as well. The largest tornado I’ve seen was probably the June 9,
2003 O’Neill NE tornado. It had a damage path 1/3 mile wide. Bartlett NE
July 12, 2004 was also a big tornado, but probably around 1/4 mile
wide. I’m not a fan of F-Scale ratings in regards to what one has seen,
but since I’ve been asked, no I’ve not seen an F5. I don’t think
anything I’ve seen was rated F4. A few were rated F3. Those are damage
scales so they are only useful for strength if a tornado hits
something. In other words an “F5” strength tornado can go through a
cornfield and only be rated F0. The strongest looking tornado I’ve seen
was probably Hill City, June 9, 2005 early on in its life. It had near
violent motions in it for a bit. Hannibal looked pretty strong before
it got to that city.

Training/Safety: This is always an interesting topic to me. A large
majority of the chase community will pound away at this, as I guess
they should. If you want to chase you should learn to understand storm
structure and what is going on. A great way to do that is to read chase
accounts(if you can find many that have a lot of pictures, as well as a
lot of words). I guess I should say that is a decent way, not a great
way. I’d say I learned the most from a combination of good chase videos
and from just going out and chasing. Sure you might want to learn
BEFORE you go chase, but you can only get so much from reading and
watching. Sooner or later you have to go and learn first hand. If you
can find someone to go with, go for it. I wouldn’t expect that any
veteran chaser owes you that however. The way gas prices are now I’m
guessing it wouldn’t be that hard to track down someone willing to show
you the ropes if you split gas with them. Storm is a pretty
good resource to find and contact chasers.

One would think chasing tornadoes and severe storms would be sooo
dangerous. It really couldn’t be any less true. All the driving and the
other people on the road is by far the biggest danger to it. Even if
you wanted to drive right into a tornado, you are going to have a hard
time pulling it off(perhaps after years of chasing you’d get better at
having that option available). Tornadoes are rare and aren’t sticking
to the highways one drives on. If you aren’t trying to kill yourself
you will have a hard time doing so chasing. I’ll take them one by one.

Hail. If you stay in your car you’ll be rather safe from hail. In the
worst case scenario you’ll find some 6 inch diameter hail. Cover your
head and stay in the car and I’m guessing you’ll come out of it alive.
I’ve chased since 1999 and have only encountered up to baseballs and I
was trying to find them on those occasions. The larger they are the
less of them there are going to be as well. They get more and more
sporadic as they get bigger–though a very rare amount of storms can
really spit out some very large stones in large numbers. The biggest
hazard will be flying glass. Cover your eyes. Once you have any idea of
what you are doing you can pretty much avoid all hail if you want to.
Then your chances of getting cored by “killer” hail will be excessively
remote. Something very bad very well could happen, but using any common
sense while staying in the car I’m sure will get you out of it
alive….VERY alive, lol. I wonder if anyone, anywhere, has ever been
killed by hail inside a car. I doubt it. So you’ll have a hard time
getting seriously injured by hail if you just stay in your car…and
that is if you even found that truly nasty stuff…which ain’t easy,
even if you are trying to.

Wind. Wind is only as dangerous as you want it to be. If it gets high
stop moving and pretty much any serious danger it poses goes away. It
is very rare to find severe storms with winds over 100 mph. 100 mph
won’t even begin to roll your car over, even if it hits you on the
side. Winds under 100 mph can certainly knock over a big tree and kill
you in a hurry though. Don’t stop near big trees. It’s all very obvious
stuff, which is why if you use any common sense, storm chasing is going
to be a VERY safe thing.

Lightning. Do yourself a big favor and just stay in the car. Sure a
bolt can go through your car and still get you, but it will have a
harder time doing that(rather than going around the frame) than it
would if there was no car frame around you in the first place. This as
a hazard can be brought to near zero by simply being smart about it and
staying in your car. So, so far, lightning, wind, and hail are really
minimal threats if you stay in the car and don’t park by 100-foot trees.

Flooding. Hmmm, I can’t think of ever being in danger from flooding
during all my chases. It’s another thing you can be smart about and
take the threat of it to near zero if you want. If you are chasing and
one area is getting a crapload of rain then maybe you don’t want to
take that exact route back home. I’ll go east for a while then north
when things like this happen.

Tornadoes. Just by their pure rarity you’ll have a hard time finding
one to hurt you. Now add to this rarity the fact they are only covering
one small spot of land and the odds of you being in that are just slim.
Now, if you know what you are doing, this can be an excessively minimal
threat as well. Anything can happen, I will say that, but it’s just a
highly unlikely thing. I’m more afraid of being hit by lightning than
I am a tornado.

So, I guess if you just start out chasing it is entirely possible to
get yourself hurt…though even then it’s going to be pretty hard. If
you chase for a while the hobby becomes an extremely safe one…if you
want it to be. I don’t think of this hobby as dangerous at all, as
goofy as that might seem. A deer coming through my windshield while
driving home at night scares me much more than anything from the storms.

Equipment FAQ
Camera Equipment: All I have are 2 digital rebels(300D and the 350D/xt)
for the camera. For the lenses I only have 2, a canon 50 mm F/1.8 and a
canon 17-40L. 2002-2003 everything was shot with a Sony DSC F707.
2004-2005 everything was shot with the first digital rebel, the
300D(2004 I only had the kit lens….in 2005 I replaced it with the
17-40L). 2006 on I’m using the new canon digital rebel xt.

Video Equipment: 96-1999 I used a VHS-C…..I think it was a canon but
maybe not. 2000-2001 I used a sony Hi-8. 2002-2003 I used a sony
digital 8(trv-340). 2004-2005 I used a sony mini-dv(trv-19). 2006 on
I’m using a Sony HC1 HDV cam.

Equipment used for data/chasing: 99-2001 NOAA weather radio was it.
2002-2004 I started using a cell phone connected to a laptop for data
in digital areas where I could get it. I also got delorme street atlas
with GPS in 2003(I think it was 2003). GPS is such a useful tool when
chasing. I used libraries a good deal in 2004 along with the cell
phone(but half the chases it wouldn’t work). In 2005 I bought xm
wx-worx. It is a wonderful tool for radar and surface data as you can
get the data anywhere and everywhere since it is via satellite. It
wasn’t that cheap however at about $900 for the hardware and software
alone. Also in 2005 I started using WI-FI. I’m almost thinking had I
used WI-FI before buying the wx-worx receiver I might not have bothered
with it. WI-FI takes the place of libraries and can be found about
anywhere now. With WI-FI and my xm wx-worx I never use my cell phone
anymore. That is all the stuff I have for chasing. I don’t use my
scanner for NOAA at all now since it is pretty useless–that and I lost
an antenna the chase after I had replaced the one I had just lost. So
for equipment my car is very bare. Other than hail dents one wouldn’t
know I was a chaser if they saw me stopped on a road somewhere since
there really isn’t a single antenna now(other than the one for xm which
is a tiny magnet you don’t really see).

Mounts: I have a suction cup mount for the dash that I rarely used. Now
that I have this new cam it won’t work on my dash. What I do have that
I use all the time are 2 window clamp mounts. I NEVER use a tripod when
chasing now. These window clamps are extremely useful for chasing. You
just stop with your window facing the way you want and slap the cams on
there. I’m a chicken of lightning anyway so this really helps out. It’s
also much quicker than getting out and setting up a tripod so you’ll do
it more often. They are only like $25 a piece so there’s really no
reason not to get one or two. Since I chase alone and am doing video
AND stills I have to cut corners and do what I can to speed things up.

Editing software: For photos I use Adobe Photoshop CS. I shoot strictly
in RAW format now, though I don’t do a whole lot to the photos during
RAW conversion. For video I use Adobe Premiere Pro 1.5. For the dvd
creation I use TMPGEnc products for encoding and authoring(pretty hard
to beat the quality at the price).

Severe Weather FAQ/Talk:

Cool skies: I’ve been asked a good question about the look of the
storms on this site and why people don’t see them look like this where
they live. There are a few factors to this. First and most important is
likely just the region of the US one lives in. Big severe storms need
good moisture/juice and in this area that comes from the gulf of
Mexico. If you are west of the Rocky Mountains this good moisture
really doesn’t get over there. The Pacific Ocean waters are pretty cool
up and down the west coast compared to the gulf of Mexico waters. For
severe weather you’ll want warm moisture near the surface. You can get
“too much” moisture if you get too far east of the Rockies. When there
is “too much” moisture you won’t be able to see the features of the
storms. For severe weather you want the mid and upper-levels of the
atmosphere to be very dry. It seems to me this is the best just east of
the Rockies right when storm systems move out onto the plains. They’ll
come out and often cause storms(our cool ones) which will tend to fill
the mid and upper-levels with moisture which will spread to the east.
Combine that with having a big abundance of low level moisture in the
east or southeast and the storms will just be a bit more “soupy” or
“grungy”. There are obvious exceptions to this, but a good rule for
seeing structure of storms is the further west towards the Rocky
Mountains you get the better the viewing will be. If you get too close
to them there often isn’t even that much low-level moisture out there.
The dryline, or separation of moist and dry air, will often run up and
down the middle of the plains states of ND, SD, NE, KS, OK, TX. Some
systems or setups will pull the moisture west to the Rockies but often
even out there it is too dry. So one reason for the crazy look to
storms is where they are located. I think the best area to see storm
structure is located in the middle of ND, SD, NE, KS, OK, TX. Different
systems and setups will obviously dictate how that happens. This area
would be a most typical spot though. Shear on a storm will also have
effects on this. If there is no shear then rain falls very near or even
back down where the updraft was. Supercells happen in good shear
environments(which are also most often found in the middle of the
plains) and will many times have their precip well separated from the
updraft tower, allowing you to see more of its structure.

Another very big part of this would be the fact I chase the storms. If
I were to sit here in Blair NE, in this good region, I wouldn’t see
very cool storms that often at all. I find the storms early in their
lives(or at least try to) where they form, often very very far from
home(several states away sometimes). Storms will grow into big
complexes the majority of the time. The best structure is often before
that happens. I’m also with the storms for a duration of their life. If
I watched one at home I’d see one point of its life. Chasing them you
see them at many points while they change shapes and severity. I hope
all that makes some sense. I could live in the best area for storm
structure there is and not have much to show for it if I didn’t
actually go and chase, and chase often.

Chase Partners: I’m often asked about ride alongs or if I need a chase
partner. I’ve always chased solo and I’m pretty sure I always will. I
give everyone the same answer and it is nothing against anyone. This
way has always made the most sense to me. Out of all my chases I’ve
only once went with someone. I’ve never had anyone with me(outside
taking my old boss with while out of state at some classes…which I
will admit was enjoyable). It’s nice to only have yourself to worry
about. I’m going to be my only reason to have to stop, and I don’t stop
much. Keeping yourself on the move on a chase day can be very
important. With others around I tend to chat too much. I also don’t
record as much stuff while I’m with others. I make some effort to avoid
others till after the chase is over. I’ve seen myself hang around one
spot too long and be a bit late several times when having others
around. The other part is safety concerns of others. If I’m my only
risk I can do what I want. Chasing alone lets me do as I wish in
regards to how close I might want to get if the opportunity comes up.
There’s also the desire to even be out there that can be in question
when you have others along that may or may not want to chase as much or
as long. I can avoid a lot of possible negatives and potential added
hassles by just sticking to the solo thing. I also go out of my way to
avoid meeting up and caravaning. I’ll only do that with the few chasers
I know fairly well already. But, even then I still kind of try to avoid
it since I know how I am with anyone around(talking too much and don’t
keep on the move enough). So no I don’t have a partner and am not
looking to get one. I hope folks don’t take that personally somehow and
can realize it’s just how I’ve always liked to chase. I’m not 100%
against caravaning with someone I get to know a bit, so I’d have to say
that possibility remains somewhat open. All this aside, if you see a
black Mustang with NE tags parked somewhere don’t be afraid to stop
and say hi if you want. I’m not too big of a jerk, lol.

Chasing Stats: This is just a spot to cover a few things I’ve been
asked or might be asked. I’ve never kept real good stats. I’ve started
to keep better track of chases and mileage the last couple years since
this is my only income right now. My first chase was May 16, 1999. I
saw an F3 tornado in western IA that day. I video taped storms around
town from 91-98 but I wouldn’t call any of that actual chasing since I
never drove very far out of town for any of them.

HERE is a map of the tornadoes I’ve seen and the dates next to them.
I’m pretty “anal” about what I’ll call a tornado anymore. Early on I
wanted everything to be a tornado. Now that I’ve seen enough I don’t
see much need in lying to myself. All the dots on there are for sure,
no doubt, tornadoes. That’s really not that many for having been
chasing since 1999, but I’m certainly not going to complain. What isn’t
on that map are all the crazy supercells that I’ve been lucky enough to see.

I’ve chased in Wyoming, Colorado, South Dakota, North Dakota,
Minnesota, Iowa, Illinois, Indiana, Missouri, Kansas, Oklahoma, Texas,
and Nebraska. In 99 I only had about 5 chases. 2000 I had around 12.
2001 about 20. 2002 and 2003 were probably around 25 chases each. 2004
I think was over 30 chases. 2005 I had 38 chases, the most I’ve had in
a year so far. 2006 should end up about the same as 2005. In 2005 I
drove around 18,000 miles just chasing storms. I’m guessing 2006 will
be very similar to that amount.

The largest hail I’ve seen was up to baseball on a couple occasions. I
was on the Aurora NE storm in 2003 not long after it produced the world
record for hail size. I would have loved to have been in it at the
time. I believe that record stone was over 17 inches around. Highest
wind I’ve been in would be very hard to say. Once it is over 70 mph it
can be hard to judge. I’ve been in winds likely over 70 mph a whole lot
of times. It actually seems easier to find tornadoes than it is to find
hail larger than baseballs or winds to 90 mph or more.”