By essentially turning down the pitch of sound waves, engineering researchers have developed a way to extract more information from acoustic fields than ever before.
That additional information could boost performance of passive sonar and echolocation systems for detecting and tracking adversaries in the ocean; medical imaging devices; seismic surveying systems for locating oil and mineral deposits; and possibly radar systems as well.
“Acoustic fields are unexpectedly richer in information than is typically thought,” says David Dowling, a professor in the mechanical engineering department at the University of Michigan.
Screeching sounds
Dowling likens his approach to solving the problem of human sensory overload.
Sitting in a room with your eyes closed, you would have little trouble locating someone speaking to you at normal volume. Speech frequencies sit squarely in the comfort zone for human hearing.
Now imagine yourself in the same room when a smoke alarm goes off. Sound waves at much higher frequencies produce that annoying screech, and you would struggle to locate its source without opening your eyes for additional sensory information. At such high frequencies, sound creates directional confusion for the human ear.
“The techniques my students and I have developed will allow just about any signal to be shifted to a frequency range where you’re no longer confused,” says Dowling.
Sonar without the confusion
Navy sonar arrays on submarines and surface ships face a similar kind of confusion as they search for vessels on the ocean surface and below the waves. Detecting and locating enemy ships at sea is a crucial task for naval vessels.
Sonar arrays are typically designed to record sounds in specific frequency ranges. Sounds with frequencies higher than an array’s intended range may confuse the system; it might be able to detect the presence of an important contact but still be unable to locate it.
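A standard rule of thumb helps explain that confusion: an array can pin down a sound’s direction unambiguously only when the sound’s wavelength is at least twice the spacing between its sensors. A minimal sketch of that half-wavelength limit, with assumed numbers:

```python
# Half-wavelength rule of thumb (illustrative numbers, not from the study):
# an array localizes a sound unambiguously only if its wavelength is at
# least twice the sensor spacing; above that frequency, spatial aliasing
# lets the array detect a contact while misjudging its direction.
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s (assumed)

def max_unambiguous_frequency(sensor_spacing_m: float) -> float:
    """Highest frequency localized without ambiguity: c / (2 * d)."""
    return SOUND_SPEED / (2.0 * sensor_spacing_m)

# Sensors 0.75 m apart can localize sounds only up to about 1000 Hz.
print(max_unambiguous_frequency(0.75))  # -> 1000.0
```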
Any time sound is recorded, a microphone takes the role of the human ear, sensing sound amplitude as it varies in time. Through a mathematical calculation known as a Fourier transform, scientists can convert that record of sound amplitude versus time into sound amplitude versus frequency.
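In practice, that conversion is a fast Fourier transform, a few lines of code in any scientific computing environment. A minimal sketch in Python with NumPy, using a made-up two-tone signal in place of real microphone data:

```python
import numpy as np

# A minimal sketch of the time-to-frequency conversion, with a synthetic
# two-tone signal standing in for a real microphone recording.
sample_rate = 8000                         # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / sample_rate)   # one second of sample times

# Pretend recording: tones at 300 Hz and 450 Hz plus a little noise.
recording = (np.sin(2 * np.pi * 300 * t)
             + 0.5 * np.sin(2 * np.pi * 450 * t)
             + 0.1 * np.random.randn(t.size))

# The Fourier transform: amplitude versus time becomes a complex
# amplitude for each frequency.
spectrum = np.fft.rfft(recording)
freqs = np.fft.rfftfreq(t.size, 1.0 / sample_rate)

# Peaks of np.abs(spectrum) now sit at the 300 Hz and 450 Hz bins.
```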
With the recorded sound translated into frequencies, Dowling puts his technique to use. He mathematically combines any two frequencies within the signal’s recorded range to reveal information outside that range, at a new, third frequency that is the sum or difference of the two input frequencies.
“This information at the third frequency is something that we haven’t traditionally had before,” he says.
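The article does not give the formula, but a form consistent with Dowling’s published “autoproduct” work is a simple product of two complex spectral amplitudes: the product with one term conjugated acts like a field component at the difference frequency, and the plain product like one at the sum frequency. A sketch under that assumption:

```python
import numpy as np

# Hedged sketch of the combination step, assuming the simple product form
# of Dowling's published "autoproducts" (the article itself does not give
# the formula). Multiplying two complex spectral amplitudes yields a
# quantity that behaves like a field component at a third frequency.
def autoproducts(spectrum: np.ndarray, i: int, j: int):
    """Combine frequency bins i and j (with f_i >= f_j).

    Returns (difference_product, sum_product): quantities that act like
    field components at f_i - f_j and f_i + f_j, respectively.
    """
    difference_product = spectrum[i] * np.conj(spectrum[j])  # at f_i - f_j
    sum_product = spectrum[i] * spectrum[j]                  # at f_i + f_j
    return difference_product, sum_product

# With the 1 Hz-per-bin spectrum from the previous sketch, combining the
# 450 Hz and 300 Hz bins gives quantities acting at 150 Hz and 750 Hz,
# both outside the 300-450 Hz band that was actually recorded:
#   diff_ap, sum_ap = autoproducts(spectrum, 450, 300)
```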
In the case of a Navy vessel’s sonar array, that additional information could allow it to reliably locate an adversary’s ship or underwater asset from farther away, or with recording equipment that was never designed for the signal’s frequency range. In particular, tracking the distance and depth of an adversary from hundreds of miles away, far beyond the horizon, might be possible.
From sonar to ultrasounds
And what’s good for the Navy may also be good for medical professionals investigating areas of the body that are hardest to reach, such as inside the skull. Similarly, the research could improve remote seismic surveys that probe the earth for oil or mineral deposits.
“The science that goes into biomedical ultrasound and the science that goes into Navy sonar are nearly identical,” Dowling says. “The waves that I study are scalar, or longitudinal, waves. Electromagnetic waves are transverse, but those follow similar equations. Also, seismic waves can be both transverse and longitudinal, but again they follow similar equations.
“There’s a lot of potential scientific common ground, and room to expand these ideas,” says Dowling.
The study appears in Physical Review Fluids. The US Navy primarily funds Dowling’s work.
Source: University of Michigan