Mark Fischer, programmer and whale researcher, has adapted a mathematical technique known as the wavelet transform to graph cetacean calls. He maps whale calls in search of underlying patterns, with the ultimate goal of exploring the potential for cetacean language. Each wavelet image here represents between 0.25 and 1 second of sound, with the frequency spectrum limited to the human audible range. These images have been displayed in art galleries in Europe and the USA. In 2005, Interspecies successfully raised funds to provide Mark with a new computer system, allowing him to generate these images as high-resolution movies. His art is printed on museum-quality paper using archival inks.
After a three-year “walkabout” in Baja California, an artist and software designer named Mark Fischer became fascinated by cetacean acoustics. As a trained computer engineer, he soon realized that the visual representations of whale song had not advanced much beyond crude graphs and spectrograms. There was nothing that adequately captured the sheer beauty of sounds that can be louder than a jet engine and as melodic as the human voice. Fischer found his solution in the mathematical theory of wavelets, which he applied to sounds from different frequencies, translating them into color-coded visual forms. “It’s a kind of photography to me,” Fischer says, “with mathematics as the lens and the computer as the camera.” He calls the result “the shape of the sound.”
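Fischer's own pipeline isn't published here, but the core idea — decomposing a sound into wavelet coefficients whose magnitudes can then be color-coded — can be sketched with the simple Haar wavelet (the same base function credited in one of the pilot whale graphs below). This is a minimal illustration, not Fischer's actual method:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Scaled pairwise sums capture the coarse shape of the signal;
    scaled pairwise differences capture fine detail at this scale."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_scalogram(signal, levels=4):
    """Stack detail coefficients from successive decomposition levels.

    Each row is one scale; the magnitudes are what an artist might
    map to color to get 'the shape of the sound'."""
    rows, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        rows.append(np.abs(detail))
    return rows

# A toy "call": a 440 Hz tone sampled at 8 kHz for 0.25 seconds.
t = np.arange(0, 0.25, 1 / 8000)
call = np.sin(2 * np.pi * 440 * t)
rows = haar_scalogram(call)
print([len(r) for r in rows])   # each level halves the time resolution
```

Fischer's gallery captions name more sophisticated base functions (biorthogonal 3.1/3.3/3.7, Daubechies 15, Symlet 7), which trade the Haar wavelet's blocky shape for smoother ones better matched to tonal calls.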
email : aguasonic [at] yahoo [dot] com
Mark comments that the colors he chooses are mostly random, although some display red as low frequencies and blue as high.
From Mr. Jay Barlow’s recording of a minke whale “boing,” posted on the Southwest Fisheries web site. A biorthogonal 3.7 transform.
Generated from a recording of a bowhead whale, sampled from Cornell’s “Animal Diversity” CD. This graph uses the Daubechies 15 base function.
A biorthogonal 3.3 graph of a wonderful recording of a Hawaiian humpback made by Mr. Salvatore Cerchio.
Generated from an audio sample borrowed from disc 1, track 2 of the “Fins ’93” CD, produced by Cornell University, using the biorthogonal 3.3 base function.
This wavelet transform is generated from a recording of a bottlenose dolphin made by Mr. Jay Barlow.
A biorthogonal 3.1 graph made from the so-called “B” call of a northeast Pacific blue whale, recorded by the US Navy’s SOSUS hydrophone system.
This wavelet is generated from a 0.25-second orca call recorded near Vancouver Island by Dr. Paul Spong.
Recorded near the Azores Islands.
Generated from a recording made by Dr. Fred Sharpe. The original audio may be heard on the web site of Simon Fraser University in B.C.
Generated using the Haar base function. From a recording of a pod of pilot whales made near the Channel Islands off southern California.
A one-second beluga chirp recorded in the White Sea by Jim Nollman of Interspecies.com. This is a Symlet 7 graph.
A 0.25-second recording of this small dolphin species, made off Vancouver Island by Jim Nollman of Interspecies.com. This is a biorthogonal 3.1 graph.
So what about belugas? Bioacousticians at Moscow’s Shirshov Institute analyzed beluga calls and recently demonstrated that these whales employ at least 24 phonemes to compose “words” in what appears to be an expression of true language. OK, so you don’t trust the Russians. Jim Nollman (http://www.interspecies.com/) notes that human language is overwhelmingly time-dependent: this paragraph could not be spoken in a single second, but it would still be understood if spoken at a different pitch. A very open-minded musician, Jim notes that “Belugas produce a dense, multilayered variety of calls that make use of an extremely broad band of frequencies, ten times wider than humans can physically hear, and yet vocalized in time spans as short as one hundredth of a second. They also appear to control the interference patterns across this wide frequency spectrum. Humans hear interference patterns (for one example) as the beats that occur while tuning one guitar string to another one. Cetacean echolocation both transmits and resolves as beats. If an echolocating species were to develop language, it would likely be based, not on time like human language, but on a symbolic derivation of these broadband beats.”
Jim goes on to say “a beat language would probably convey its messages through sonic imaging or holograms as much as through phonemes and sentences as we humans have developed to fit our own sensual receptors.” We can only guess what kind of information can be conveyed using a sophisticated beat structure, with some beluga calls so incredibly wide-frequency, with so many beats modulating at any single moment, that Jim describes “the potential content of a one-second whale call as containing an entire feature film’s worth of information.” If you have a hard time picturing this, Mark Fischer (http://aguasonic.com/) has adapted wavelet graphing techniques to study cetacean calls, finding that some beats do modulate in frames of hundredths of a second. Take a look at that web site and let your mind loose. Here’s just one of Mark’s graphs of a beluga call to convey some of the complexity crammed in a very short time. Notice the circles.
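Jim’s guitar-string example can be made concrete. Two tones a few hertz apart sum to a single carrier tone multiplied by a slow envelope, and the listener hears that envelope as beats at the difference frequency — a small numpy sketch of the identity (the specific frequencies are arbitrary):

```python
import numpy as np

# Two tones a few hertz apart, like two slightly mistuned guitar strings.
f1, f2, rate = 440.0, 444.0, 44100
t = np.arange(0, 2.0, 1 / rate)
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), so the
# mix equals a carrier at the average frequency times a slow cosine
# envelope; the ear hears |f2 - f1| beats per second.
envelope = 2 * np.cos(np.pi * (f2 - f1) * t)
carrier = np.sin(np.pi * (f1 + f2) * t)
assert np.allclose(mix, envelope * carrier)

beats_per_second = f2 - f1
print(beats_per_second)   # 4.0
```

With only two pure tones the beat carries one number, the frequency difference; the speculation above is that a call spanning a very wide band could modulate many such differences at once.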
But this intriguing image is only a random sample of a call without context, without knowing what was happening around the whale when the sound was made, without a clue about the motivation for the call. To make real progress these two gifted people need high quality recordings of specific calls by individuals in specific contexts. They need help from the experts who can define those contexts. They need the funds to have a real computer churn numbers, and to put a broad team together in earnest. This is not about talking to belugas, but finding a way across an intellectual and cultural gap of our own making. We don’t have to talk to belugas to learn something significant from the way they talk to each other. They’ve probably been making complex calls a lot longer than humans have had language, and it’s about time we understood a little more about it. Any suggestions?
Subtle Math Turns Songs of Whales Into Kaleidoscopic Images
By Gretchen Cuda / August 1, 2006
What do whale songs and wavelets have in common? Quite a bit, and the wavelets have nothing to do with water. Mark Fischer found a mathematical tool to translate the subtlety and nuance of whale and dolphin sounds into these mandala-like images. In a Northern California studio, Mr. Fischer, an engineer by training, uses wavelets – a technique for processing digital signals – to transform the haunting calls of ocean mammals into movies that visually represent the songs, and still images that look like electronic mandalas. Mr. Fischer learned about acoustics by developing software for Navy sonar and the telecommunications industry. Years later, a serendipitous brush with whale researchers in Baja California led him to take a closer look at whales and the diversity of their intricate underwater communication. “I don’t think anyone has ever spent even a little time around a whale and not been amazed by it,” Mr. Fischer said in an interview.
Mr. Fischer creates visual art from sound using wavelets. Once relatively obscure, wavelets are being used in applications as diverse as JPEG image compression, high definition television and earthquake research, said Gilbert Strang, a math professor at the Massachusetts Institute of Technology and an expert on wavelets. They are popular now in part because they can capture intricate detail without losing the bigger picture, and when presented in circular form (using a cylindrical coordinate system), repeated patterns are even more evident. By stringing successive images together, Mr. Fischer transforms still images into animated audio files that bring the sound to life.
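The article doesn’t spell out Fischer’s circular mapping, but the idea of wrapping the time axis around a circle so that repetition shows up as rotational symmetry can be sketched in a few lines. The coordinate convention here is an assumption for illustration only:

```python
import numpy as np

def wrap_to_circle(coeffs, inner_radius=1.0, ring_width=0.5):
    """Map one scale's wavelet coefficients onto a ring.

    Time index becomes angle around the circle; coefficient magnitude
    is pushed outward from the ring's base radius, so a repeating
    pattern in the call shows up as rotational symmetry."""
    c = np.abs(np.asarray(coeffs, dtype=float))
    n = len(c)
    theta = 2 * np.pi * np.arange(n) / n            # time -> angle
    r = inner_radius + ring_width * c / np.max(c)   # magnitude -> radius
    return r * np.cos(theta), r * np.sin(theta)

# A coefficient row with an exact 4-fold repeat wraps into a curve
# with 4-fold rotational symmetry.
row = np.tile(np.hanning(64), 4)
x, y = wrap_to_circle(row)
print(x.shape)   # (256,)
```

Stacking one such ring per scale would give a rough analogue of the concentric “electronic mandala” layout, with each frame of an animation drawn from the next slice of the recording.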
Among whales, certain sounds and patterns are unique to different species, and even individuals in a group – something like an auditory fingerprint, Mr. Fischer said. “To anyone who doesn’t listen to it on a regular basis it sounds like a bunch of clicks,” he said. “But if you’re a whale – or someone who studies whales – it becomes clear that they have their own dialects.” Wavelets are capable of picking up those distinctions, Mr. Fischer said, nuances that may be missed by the human ear or less detailed visualization methods. “You can pick out any one of those movies and I’ll tell you what it is without hearing a thing,” he said. “The differences are that dramatic.” He envisions a day when researchers may be able to use images generated using wavelets to identify and track individual whales.
Peter Tyack agrees that the technique has potential not only as art, but as a scientific research tool. A senior scientist at Woods Hole Oceanographic Institution, Dr. Tyack studies the way humpback whales communicate, trying to show that the repetitions in whale songs follow grammatical rules similar to those of human language. “Looking at those figures, it looked like you could see a lot of repeated units,” Dr. Tyack said of the images. “It looks like he’s visualizing some of the points that we made in the paper about humpback song.”

Despite having analyzed recordings from at least 16 species of whales, Mr. Fischer said he had just scratched the surface. “It’s still a wide-open world out there,” he said. “You think you’re in the 21st century and we have the means to get anything, but when it concerns the deep ocean there is still quite a bit of mystery.”
In the meantime, Mr. Fischer hopes that by merging science and art, he will inspire a greater appreciation of whales among both marine biologists and the public, as he gives many people a glimpse of a world they would otherwise never experience. “It’s a very rare opportunity to be in the water listening to a whale,” he said. A picture, on the other hand, is something you can hang on your wall and look at every day. “When you see what whales are doing with sound, or begin to see what they are capable of, it is clear that humans are not the only artists on the planet,” he said.