New Math in HIV Fight
by Mark Schoofs / June 21, 2011

Scientists using a powerful mathematical tool previously applied to the stock market have identified an Achilles heel in HIV that could be a prime target for AIDS vaccines or drugs.

The research adds weight to a provocative hypothesis—that an HIV vaccine should avoid a broadside attack and instead home in on a few targets. Indeed, there is a rare group of patients who naturally control HIV without medication, and these “elite controllers” most often assail the virus at precisely this vulnerable area.

“This is a wonderful piece of science, and it helps us understand why the elite controllers keep HIV under control,” said Nobel laureate David Baltimore. Bette Korber, an expert on HIV mutation at the Los Alamos National Laboratory, said the study added “an elegant analytical strategy” to HIV vaccine research.

“What would be very cool is if they could apply it to hepatitis C or other viruses that are huge pathogens—Ebola virus, Marburg virus,” said Mark Yeager, chair of the physiology department at the University of Virginia School of Medicine. “The hope would be there would be predictive power in this approach.” Drs. Baltimore, Korber and Yeager weren’t involved in the new research.

One of the most vexing problems in HIV research is the virus’s extreme mutability. But the researchers found that there are some HIV sectors, or groups of amino acids, that rarely make multiple mutations. Scientists generally believe that the virus needs to keep such regions intact. Targeting such sectors could trap HIV: If it mutated, it would disrupt its own internal machinery and sputter out. If it didn’t mutate, it would lie defenseless against a drug or vaccine attack.

The study was conducted at the Ragon Institute, a joint enterprise of Massachusetts General Hospital, the Massachusetts Institute of Technology and Harvard University. The institute was founded in 2009 to convene diverse groups of scientists to work on HIV/AIDS and other diseases.

Two of the study’s lead authors aren’t biologists. Arup Chakraborty is a professor of chemistry and chemical engineering at MIT, though he has worked on immunology, and Vincent Dahirel is an assistant professor of chemistry at the Université Pierre et Marie Curie in Paris. They collaborated with Bruce Walker, a longtime HIV researcher who directs the Ragon Institute. Their work was published Monday in the Proceedings of the National Academy of Sciences.

To find the vulnerable sectors in HIV, Drs. Chakraborty and Dahirel reached back to a statistical method called random matrix theory, which has also been used to analyze the behavior of stocks. While stock market sectors are already well defined, the Ragon researchers didn’t necessarily know what viral sectors they were looking for. Moreover, they wanted to take a fresh look at the virus. So they defined the sectors purely mathematically, using random matrix theory to sift through most of HIV’s genetic code for correlated mutations, without reference to previously known functions or structures of HIV.

The segment that could tolerate the fewest multiple mutations was dubbed sector 3 on an HIV protein known as Gag. Previous research by Dr. Yeager and others had shown that the capsid, or internal shell, of the virus has a honeycomb structure. Part of sector 3, it turns out, helps form the edges of the honeycomb. If the honeycomb suffered too many mutations, it wouldn’t interlock, and the capsid would collapse.
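
The sector-finding idea can be illustrated in miniature. The sketch below is not the Ragon team’s actual pipeline; it is a toy version, assuming NumPy, with an invented “alignment” in which ten positions are wired to mutate together. Eigen-decomposition of the position-by-position correlation matrix then picks that group out—the essence of how correlated mutations reveal a sector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy alignment: 500 viral sequences x 40 positions, 1 = mutated.
# Positions 0-9 are built to mutate together (a hidden "sector");
# the rest mutate independently. All sizes and rates are illustrative.
n_seq, n_pos = 500, 40
x = (rng.random((n_seq, n_pos)) < 0.2).astype(float)
linked = (rng.random(n_seq) < 0.2).astype(float)    # shared hidden driver
x[:, :10] = np.maximum(x[:, :10], linked[:, None])  # couple positions 0-9

c = np.corrcoef(x, rowvar=False)        # position-position correlations
evals, evecs = np.linalg.eigh(c)
top = evecs[:, -1]                      # eigenvector of the largest eigenvalue
sector = np.argsort(-np.abs(top))[:10]  # positions loading most strongly

# The ten coupled positions dominate the leading eigenvector.
print(sorted(sector.tolist()))
```

The real analysis had to separate genuine sectors from chance correlations across thousands of sequences, which is exactly the filtering job random matrix theory does.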

For years, Dr. Walker had studied rare patients, about one in 300, who control HIV without taking drugs. He went back to see what part of the virus these “elite controllers” were attacking with their main immune-system assault. The most common target was sector 3.

Dr. Walker’s team found that even immune systems that fail to control HIV often attack sector 3, but they tend to devote only a fraction of their resources against it, while wasting their main assault on parts of the virus that easily mutate to evade the attack. That suggested what the study’s authors consider the paper’s most important hypothesis: A vaccine shouldn’t elicit a scattershot attack, but surgical strikes against sector 3 and similarly low-mutating regions of HIV.

“The hypothesis remains to be tested,” said Dan Barouch, a Harvard professor of medicine and a colleague at the Ragon Institute. He is planning to do just that, with monkeys. Others, such as Oxford professor Sir Andrew McMichael, are also testing it.

The Ragon team’s research focused on one arm of the immune system—the so-called killer T-cells that attack other cells HIV has already infected. Many scientists believe a successful HIV vaccine will also require antibodies that attack a free-floating virus. Dr. Chakraborty is teaming up with Dennis Burton, an HIV antibody expert at the Scripps Research Institute in La Jolla, Calif., to apply random matrix theory to central problems in antibody-based vaccines.


How Random-Matrix Theory Found Its Way Into a Promising AIDS Study
by Mark Schoofs / June 21, 2011

Random-matrix theory is a mathematical method for finding hidden correlations within masses of data. It doesn’t just find pairs, a relatively easy task, but can detect groups of many correlated units and even groups that change over time, adding and losing members. The theory was developed in the middle of the 20th century by Nobel laureate Eugene Wigner and others to address problems in nuclear physics. In the 1990s and early 2000s, physicists applied it to the stock market. A major event such as a severe recession will act on almost all stocks together, a correlation so broad it has little use. At the other extreme are millions of random correlations – stocks rising or falling together purely by chance. But some stocks, such as those of car companies and parts makers, act in true correlation.

Sure enough, random-matrix theory filtered out the “noise” of random correlations and overwhelming events to reveal such genuine correlations. One of the authors of that finding, physicist Parameswaran Gopikrishnan, working with Boston University physics professor H. Eugene Stanley, is now a managing director at Goldman Sachs Group Inc. “Of course,” Dr. Stanley said, “we know those sectors are correlated anyway.” But his team found the sectors purely by using random-matrix theory “without looking at the innards of the companies,” he explained. That proved the power of the theory, which Dr. Stanley believes could act as an early-warning system for stock-market analysts. If one company in a sector “wanders away and stops being correlated, that would tell you something is going on” in that firm.

Arup Chakraborty, a chemistry and chemical engineering professor at MIT, knew of random-matrix theory from the stock market work and from a scientific colleague who had used it to analyze enzymes, though not in HIV. Dr. Chakraborty thought it could help find sectors of HIV that rarely undergo multiple mutations – and it did.
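
A rough illustration of that filtering, assuming NumPy, with invented stocks and figures: ten of thirty simulated stocks share a common factor (think carmakers and parts suppliers), and the largest eigenvalue of the correlation matrix is compared against a null built by shuffling away all genuine correlations — a simple stand-in for the full random-matrix calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy daily "returns": 30 stocks x 750 days; stocks 0-9 share a
# common sector factor. All numbers are made up for illustration.
n_stock, n_day = 30, 750
r = rng.normal(size=(n_day, n_stock))
factor = rng.normal(size=n_day)
r[:, :10] += 0.8 * factor[:, None]      # the genuine sector

def top_eigenvalue(returns):
    return np.linalg.eigvalsh(np.corrcoef(returns, rowvar=False))[-1]

observed = top_eigenvalue(r)

# Null model: shuffle each stock's history independently, destroying
# all real correlations while keeping each stock's own statistics.
null = [top_eigenvalue(np.column_stack(
            [rng.permutation(r[:, j]) for j in range(n_stock)]))
        for _ in range(50)]

print(observed > max(null))  # True: the sector stands out from noise
```

The eigenvector attached to that outsized eigenvalue identifies which stocks belong to the sector, just as Dr. Stanley’s team found — without looking at the innards of the companies.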

The deep law that shapes our reality
by Mark Buchanan / 07 April 2010

Suppose we had a theory that could explain everything. Not just atoms and quarks but aspects of our everyday lives too. Sound impossible? Perhaps not. It’s all part of the recent explosion of work in an area of physics known as random matrix theory. Originally developed more than 50 years ago to describe the energy levels of atomic nuclei, the theory is turning up in everything from inflation rates to the behaviour of solids. So much so that many researchers believe that it points to some kind of deep pattern in nature that we don’t yet understand. “It really does feel like the ideas of random matrix theory are somehow buried deep in the heart of nature,” says electrical engineer Raj Nadakuditi of the University of Michigan, Ann Arbor.

All of this, oddly enough, emerged from an effort to turn physicists’ ignorance into an advantage. In 1956, when we knew very little about the internal workings of large, complex atomic nuclei, such as uranium, the Hungarian-American physicist Eugene Wigner suggested simply guessing. Quantum theory tells us that atomic nuclei have many discrete energy levels, like unevenly spaced rungs on a ladder. To calculate the spacing between each of the rungs, you would need to know the myriad possible ways the nucleus can hop from one to another, and the probabilities for those events to happen. Wigner didn’t know, so instead he picked numbers at random for the probabilities and arranged them in a square array called a matrix.

The matrix was a neat way to express the many connections between the different rungs. It also allowed Wigner to exploit the powerful mathematics of matrices in order to make predictions about the energy levels. Bizarrely, he found this simple approach enabled him to work out the likelihood that any one level would have others nearby, in the absence of any real knowledge. Wigner’s results, worked out in a few lines of algebra, were far more useful than anyone could have expected, and experiments over the next few years showed a remarkably close fit to his predictions. Why they work remains a mystery even today. What is most remarkable, though, is how Wigner’s idea has been used since then. It can be applied to a host of problems involving many interlinked variables whose connections can be represented as a random matrix.
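
Wigner’s prediction can be checked numerically in a few lines. This is a minimal sketch, assuming NumPy, with arbitrary matrix size and trial count: it samples random symmetric matrices, collects the spacing between the two eigenvalues nearest the centre of the spectrum, and confirms that very small spacings are rare — the “level repulsion” his formula predicts, quite unlike what independent random levels would show.

```python
import numpy as np

rng = np.random.default_rng(0)

def central_spacings(n=200, trials=300):
    """Sample random symmetric matrices (Wigner's guess, in modern
    dress) and collect the spacing between the two eigenvalues
    nearest the centre of the spectrum."""
    spacings = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2 * n)   # random symmetric "Hamiltonian"
        ev = np.linalg.eigvalsh(h)
        mid = n // 2
        spacings.append(ev[mid] - ev[mid - 1])
    s = np.array(spacings)
    return s / s.mean()                  # normalize mean spacing to 1

s = central_spacings()
# Wigner's surmise, P(s) = (pi*s/2) * exp(-pi*s**2/4), makes near-zero
# spacings very unlikely: neighbouring levels "repel" each other.
print(float(np.mean(s < 0.1)) < 0.05)   # True: tiny spacings are rare
```

For independent (Poisson) levels, roughly 10% of spacings would fall below 0.1 of the mean; here almost none do.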

The first discovery of a link between Wigner’s idea and something completely unrelated to nuclear physics came about after a chance meeting in the early 1970s between British physicist Freeman Dyson and American mathematician Hugh Montgomery. Montgomery had been exploring one of the most famous functions in mathematics, the Riemann zeta function, which holds the key to finding prime numbers. These are numbers, like 2, 3, 5 and 7, that are only divisible by themselves and 1. They hold a special place in mathematics because every integer greater than 1 can be built from them. In 1859, a German mathematician called Bernhard Riemann had conjectured a simple rule about where the zeros of the zeta function should lie. The zeros are closely linked to the distribution of prime numbers.

Mathematicians have never been able to prove Riemann’s hypothesis. Montgomery couldn’t either, but he had worked out a formula for the likelihood of finding a zero, if you already knew the location of another one nearby. When Montgomery told Dyson of this formula, the physicist immediately recognised it as the very same one that Wigner had devised for nuclear energy levels. To this day, no one knows why prime numbers should have anything to do with Wigner’s random matrices, let alone the nuclear energy levels. But the link is unmistakable. Mathematician Andrew Odlyzko of the University of Minnesota in Minneapolis has computed the locations of as many as 10²³ zeros of the Riemann zeta function and found a near-perfect agreement with random matrix theory. The strange descriptive power of random matrix theory doesn’t stop there. In the last decade, it has proved itself particularly good at describing a wide range of messy physical systems.

Universal law?
Recently, for example, physicist Ferdinand Kuemmeth and colleagues at Harvard University used it to predict the energy levels of electrons in the gold nanoparticles they had constructed. Traditional theories suggest that such energy levels should be influenced by a bewildering range of factors, including the precise shape and size of the nanoparticle and the relative position of the atoms, which is considered to be more or less random. Nevertheless, Kuemmeth’s team found that random matrix theory described the measured levels very accurately.

A team of physicists led by Jack Kuipers of the University of Regensburg in Germany found equally strong agreement in the peculiar behaviour of electrons bouncing around chaotically inside a quantum dot – essentially a tiny box able to trap and hold single quantum particles (Physical Review Letters, vol 104, p 027001).

The list has grown to incredible proportions, ranging from quantum gravity and quantum chromodynamics to the elastic properties of crystals. “The laws emerging from random matrix theory lay claim to universal validity for almost all quantum systems. This is an amazing fact,” says physicist Thomas Guhr of the Lund Institute of Technology in Sweden.

Random matrix theory has got mathematicians like Percy Deift of New York University imagining that there might be more general patterns there too. “This kind of thinking isn’t common in mathematics,” he notes. “Mathematicians tend to think that each of their problems has its own special, distinguishing features. But in recent years we have begun to see that problems from diverse areas, often with no discernible connections, all behave in a very similar way.” In a paper from 2006, for example, he showed how random matrix theory applies very naturally to the mathematics of certain games of solitaire, to the way buses clump together in cities, and to the path traced by molecules bouncing around in a gas, among others.

The most important question, perhaps, is whether there is some deep theory behind both physics and mathematics that explains why random matrices seem to capture essential truths about reality. “There must be some reason, but we don’t yet know what it is,” admits Nadakuditi. In the meantime, random matrix theory is already changing how we look at random systems and try to understand their behaviour. It may possibly offer a new tool, for example, in detecting small changes in global climate.

Back in 1991, an international scientific collaboration conducted what came to be known as the Heard Island Feasibility Test. Spurred by the idea that the transmission of sound through the world’s oceans might provide a sensitive test of rising temperatures, they transmitted a loud humming sound near Heard Island in the Indian Ocean and used an array of sensors around the world to pick it up. Repeating the experiment 20 years later could yield valuable information on climate change. But concerns over the detrimental effects of loud sounds on local marine life mean that experiments today have to be carried out with signals that are too weak to be detected by ordinary means. That’s where random matrix theory comes in.

Over the past few years, Nadakuditi, working with Alan Edelman and others at the Massachusetts Institute of Technology, has developed a theory of signal detection based on random matrices. It is specifically attuned to the operation of a large array of sensors deployed globally. “We have found that you can in principle use extremely weak sounds and still hope to detect the signal,” says Nadakuditi. Others are using random matrix theory to do surprising things, such as enabling light to pass through apparently impenetrable, opaque materials. Last year, physicist Allard Mosk of the University of Twente in the Netherlands and colleagues used it to describe the statistical connections between light that falls on an object and light that is scattered away. For an opaque object that scatters light very well, he notes, these connections can be described by a totally random matrix.

What comes up are some strange possibilities not suggested by other analyses. The matrices revealed that there should be what Mosk calls “open channels” – specific kinds of waves that, instead of being reflected, would somehow pass right through the material. Indeed, when Mosk’s team shone light with a carefully constructed wavefront through a thick, opaque layer of zinc oxide paint, they saw a sharp increase in the transmission of light.

Still, the most dramatic applications of random matrix theory may be yet to come. “Some of the main results have been around for decades,” says physicist Jean-Philippe Bouchaud of the École Polytechnique in Paris, France, “but they have suddenly become a lot more important with the handling of humungous data sets in so many areas of science.”

In everything from particle physics and astronomy to ecology and economics, collecting and processing enormous volumes of data has become commonplace. An economist may sift through hundreds of data sets looking for something to explain changes in inflation – perhaps oil futures, interest rates or industrial inventories. Businesses rely on similar techniques to spot patterns in buyer behaviour and help direct their advertising.

While random matrix theory suggests that this is a promising approach, it also points to hidden dangers. As more and more complex data is collected, the number of variables being studied grows, and the number of apparent correlations between them grows even faster. With enough variables to test, it becomes almost certain that you will detect correlations that look significant, even if they aren’t.

Curse of dimensionality
Suppose you have many years’ worth of figures on a large number of economic indices, including inflation, employment and stock market prices. You look for cause-and-effect relationships between them. Bouchaud and his colleagues have shown that even if these variables are all fluctuating randomly, the largest observed correlation will be large enough to seem significant. This is known as the “curse of dimensionality”. It means that while a large amount of information makes it easy to study everything, it also makes it easy to find meaningless patterns. That’s where the random-matrix approach comes in, to separate what is meaningful from what is nonsense.
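
The effect is easy to reproduce. In this sketch, assuming NumPy and with all figures invented, one hundred purely random “indicators” are compared pairwise; the best-looking correlation comfortably exceeds the threshold that would count as significant for a single pre-chosen pair.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 purely random "economic indicators", 120 months each: by
# construction, no real relationships exist. Sizes are illustrative.
n_series, n_month = 100, 120
data = rng.normal(size=(n_month, n_series))

c = np.corrcoef(data, rowvar=False)
np.fill_diagonal(c, 0.0)          # ignore each series' self-correlation
best = float(np.abs(c).max())     # best-looking of ~5,000 pairs

# For ONE pre-chosen pair of 120-month series, |correlation| > 0.25
# would already look significant (p ~ 0.005). Searching thousands of
# pairs manufactures such a "signal" out of pure noise.
print(best > 0.25)  # True
```

The cure is not to stop collecting data, but to know how large a correlation chance alone will generate — which is precisely what the random-matrix results supply.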

In the late 1960s, Ukrainian mathematicians Vladimir Marcenko and Leonid Pastur derived a fundamental mathematical result describing the key properties of very large, random matrices. Their result allows you to calculate how much correlation between data sets you should expect to find simply by chance. This makes it possible to distinguish truly special cases from chance accidents. The strengths of these correlations are the equivalent of the nuclear energy levels in Wigner’s original work. Bouchaud’s team has now shown how this idea throws doubt on the trustworthiness of many economic predictions, especially those claiming to look many months ahead. Such predictions are, of course, the bread and butter of economic institutions. But can we believe them?
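
Their result can be stated concretely. For T observations of N uncorrelated random variables, the eigenvalues of the correlation matrix should, for large N and T, fall between (1 − √(N/T))² and (1 + √(N/T))². The sketch below, assuming NumPy with arbitrary sizes, checks that pure noise indeed stays inside that band.

```python
import numpy as np

rng = np.random.default_rng(4)

# Marcenko-Pastur band for T observations of N independent variables:
# eigenvalues of the correlation matrix should lie between
# (1 - sqrt(N/T))**2 and (1 + sqrt(N/T))**2. N and T are arbitrary here.
n_var, n_obs = 100, 400
q = n_var / n_obs
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2   # 0.25 and 2.25

noise = rng.normal(size=(n_obs, n_var))                 # no real correlations
evals = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))

# Allow a small margin for finite-size fluctuations at the band edges.
inside = float(np.mean((evals > lo - 0.1) & (evals < hi + 0.1)))
print(round(hi, 2))  # 2.25: the upper edge of the noise band
```

An eigenvalue well above that upper edge signals a correlation that chance alone is very unlikely to produce — the criterion behind separating real patterns from accidents.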

To find out, Bouchaud and his colleagues looked at how well US inflation rates could be explained by a wide range of economic indicators, such as industrial production, retail sales, consumer and producer confidence, interest rates and oil prices. Using figures from 1983 to 2005, they first calculated all the possible correlations among the data. They found what seem to be significant results – apparent patterns showing how changes in economic indicators at one moment lead to changes in inflation the next. To the unwary observer, this makes it look as if inflation can be predicted with confidence. But when Bouchaud’s team applied Marcenko’s and Pastur’s mathematics, they got a surprise. They found that only a few of these apparent correlations can be considered real, in the sense that they really stood out from what would be expected by chance alone. Their results show that inflation is predictable only one month in advance. Look ahead two months and the mathematics shows no predictability at all. “Adding more data just doesn’t lead to more predictability as some economists would hope,” says Bouchaud.

In recent years, some economists have begun to express doubts over predictions made from huge volumes of data, but they are in the minority. Most embrace the idea that more measurements mean better predictive abilities. That might be an illusion, and random matrix theory could be the tool to separate what is real and what is not. Wigner might be surprised by how far his idea about nuclear energy levels has come, and the strange directions in which it is going, from universal patterns in physics and mathematics to practical tools in social science. It’s clearly not as simple an idea as he initially thought.