Michael Garfield's Love Without End Tour Newsletter: A Window Into The Future Of Sound

24 April 2008

A Window Into The Future Of Sound

The way we listen to music today is not going to last. A bevy of new technologies is set to radically change our relationship to auditory media. Novel speaker materials, remarkable advances in recording equipment, and pioneering mind-machine interfaces have perched our culture on the verge of a world we would scarcely recognize: one where music can be played back on any surface, where headphones have been replaced by custom isolated open-air audioscapes, and where we don't even need mouths to sing or hands to play our instruments. For your consideration, I present the following major innovations - each of which, sooner or later, will force us to reconsider what we think we know about communication.

SURFACESOUND

The first, like many wonderful discoveries, came from failure - failure by the UK Ministry of Defence to find a suitable material for dampening the sound of their helicopters. Instead, they stumbled upon a unique honeycombed structure that conducts sound with surprising efficiency. Already, the technology has been sold to NXT Sound, named SurfaceSound, and crafted into folding flat-panel speakers (14 mm thick) and "speakerless" automobile interiors and mobile phones.
It has also been fashioned into transparent overlays for computer screens, which can be segregated into as many as SIX isolated sound panes. It's only a matter of time (less than a year, according to NXT's projections) before we have integrated speakers in our greeting cards and digital photo displays, and ultra-thin clip-on speakers for juicing up otherwise non-musical surfaces. One of the most exciting prospects for SurfaceSound is as a responsive natural interface for audio engineering - according to a Discovery News article, it "can be made to vibrate when touched, with individual frequencies tailored to each finger" (a benefit of its capacity to be partitioned). With the ability to place sound-conducting surfaces almost anywhere imaginable, the next challenge for NXT seems simple enough: to make "silent loudspeakers" which can only be heard when the listener is in direct contact with the speaker surface.
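
(For the technically inclined, here is a toy Python sketch of that partitioned-panel idea: each isolated pane gets its own feedback frequency, so every fingertip literally hears and feels something different. The zone names and frequencies below are invented purely for illustration - this is my own guesswork, not NXT's actual software.)

import numpy as np

ZONE_FREQS_HZ = {  # one isolated sound pane per fingertip zone (hypothetical values)
    "thumb": 180, "index": 220, "middle": 260, "ring": 300, "little": 340,
}

def feedback_tone(zone, duration_s=0.1, fs=44_100):
    """Return the vibration waveform the panel would play for a touched zone."""
    t = np.arange(0, duration_s, 1 / fs)
    return np.sin(2 * np.pi * ZONE_FREQS_HZ[zone] * t)

touched = "index"
wave = feedback_tone(touched)
print(f"{touched} pane vibrates at {ZONE_FREQS_HZ[touched]} Hz ({len(wave)} samples)")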

AUDIO SPOTLIGHT

It's an end that may already have been achieved, albeit differently, by Holosonic Research Labs. Their incredibly cool Audio Spotlight technology fires a narrow beam of ultrasound that distorts in a predictable pattern as it travels through the air. The result is the sonic equivalent of a laser - an invisible ray of sound that can only be heard by someone standing directly in its path. (Their technical explanatory page can be found here.)
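
(If you want to see the trick in miniature, here is a small Python sketch of the general parametric-array idea as I understand it: an audio signal is amplitude-modulated onto an inaudible ultrasonic carrier, and the slight nonlinearity of air - modeled very crudely below as a quadratic term - demodulates it back into an audible tone along the beam. The carrier frequency, modulation depth, and nonlinearity model are my own simplifications, not Holosonic's actual signal processing.)

import numpy as np

fs = 192_000                               # sample rate high enough to hold ultrasound
t = np.arange(0, 0.05, 1 / fs)             # 50 ms of signal

audio = np.sin(2 * np.pi * 1_000 * t)      # the 1 kHz tone we actually want to hear
carrier = np.sin(2 * np.pi * 40_000 * t)   # inaudible 40 kHz ultrasonic carrier
beam = (1 + 0.5 * audio) * carrier         # amplitude-modulated ultrasound beam

# Air is slightly nonlinear at high intensities; model that (very crudely)
# with a quadratic term, which demodulates the beam back into audible sound.
heard = beam + 0.25 * beam ** 2

freqs = np.fft.rfftfreq(len(heard), 1 / fs)
spectrum = np.abs(np.fft.rfft(heard))
audible = (freqs > 20) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audible component: {peak:.0f} Hz")   # ~1000 Hz
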
I'll say it again: Audio Spotlight turns the AIR into a loudspeaker that can only be heard by standing INSIDE of it. Sound can be projected like a beam of light, bounced off of surfaces, and manipulated in all kinds of other novel ways. The New York Times called Audio Spotlight "the most radical technological development in acoustics since the coil loudspeaker was invented in 1925," and with good reason:

Headphone museum tours are over, soon to be replaced with isolated audio programs for each display. You'll be able to listen to music over open air in the public library. The insane cacophony of public advertisements will be forgotten in favor of more discrete "hotspots" pedestrians will learn to systematically avoid. Performing musicians will be able to broadcast multiple submixes into their audiences to compensate for micro-variations in venue acoustics - or even play several concerts at once, through which listeners can move as they dance from one end of the room to the other. You'll never register a noise complaint against your neighbor's bassy stereo system again. The technology is already being adopted by an impressive array of clients, including Eastman Kodak, Hewlett-Packard, GM, Motorola, and Walt Disney Imagineering (the guys who build the rides). (A full list of current applications can be found here.) It won't be long before our children are digging iPod earbuds out of the attic and querying their internet implants as to what the hell those things are...

EPOC

And when they do, they'll probably be using technology similar to Emotiv Systems' Epoc, a new videogaming interface that replaces handheld controllers with a mind-reading headset.
Combining 100-year-old EEG technology with new software algorithms that analyze human brainwave patterns, the Epoc is a glorified biofeedback device, enabling its users to navigate computer interfaces with nothing more than intent. Beyond its immediate gaming applications (headsets will be on the market for $300 this Christmas), Emotiv is exploring numerous applications in robotics, education, and medicine - making it possible, for example, for quadriplegics to operate household devices on their own. I'm giving us a year before progressive musical acts are using these or similar headsets to control electronic music production arrays - heralding the advent of a long-imagined age when artists are able to directly convey their thoughts to an audience.
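
(To make that speculation a little more concrete, here is a hypothetical Python sketch of what an Epoc-driven rig might look like: classified "mental command" labels get mapped onto MIDI-style parameter changes in a synth patch. The command names, the read_next_command() stub, and the parameters are all invented for illustration - Emotiv's real SDK may look nothing like this.)

import random
import time

COMMAND_TO_ACTION = {
    "push": ("filter_cutoff", +5),    # imagined push -> open the filter
    "pull": ("filter_cutoff", -5),    # imagined pull -> close the filter
    "lift": ("master_volume", +3),
    "drop": ("master_volume", -3),
}

state = {"filter_cutoff": 64, "master_volume": 100}

def read_next_command():
    """Stand-in for a real headset SDK: returns one classified intent label."""
    return random.choice(list(COMMAND_TO_ACTION))

for _ in range(10):
    command = read_next_command()
    param, delta = COMMAND_TO_ACTION[command]
    state[param] = max(0, min(127, state[param] + delta))   # clamp to a MIDI-style 0-127 range
    print(f"{command:>4} -> {param} = {state[param]}")
    time.sleep(0.1)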

(A speculative recipe: combining the Epoc with the Audio Spotlight yields the potential for multi-scaped audio arrays that are activated and operated without so much as lifting a finger.)

And if that weren't enough, it is easy to imagine how such a device - apparently already well on the road to ubiquity - might catalyze a radical development of mental acuity in our culture. Having to learn what is currently an uncommon finesse with concentration and intent could well improve the focus and self-control of everyone who uses it...and already, I can hear the next generation marvel with pity and disbelief at our limited attention spans and cognitive agency.

(More on this here: Discovery Channel)

AUDEO

The Epoc's clever decryption of brainwave semantics has its limitations, however. One significant "drawback" (if that term can even be applied to such a stunning advancement) is that it cannot read your brain with enough precision to decode speech. You'll still have to move your mouth to talk...

...UNLESS, that is, you're using Ambient Corporation's Audeo, a neckband-mounted microchip that relays nerve impulses on their way to the vocal cords to a computer, where they are translated into an audible computerized voice.
Although the device can currently recognize fewer than 200 words, Ambient is working to release an improved model by the end of the year that recognizes individual phonemes and has a functionally limitless vocabulary. Michael Callahan, Ambient's twenty-four-year-old co-founder, placed the first public "voiceless phone call" at a recent technology conference (you can find the video embedded in New Scientist's recent article). In support of my generation-of-techno-yogis hypothesis, Callahan says that making clean electrical signals that the Audeo can understand requires the specific, deliberate imagining of voicing each word - something he calls "a level above thinking."
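
(Here is a toy Python sketch of why phoneme-level recognition blows the vocabulary cap wide open: instead of matching whole words against a fixed list, the decoder stitches words together from a pronunciation dictionary. The phoneme symbols, the tiny dictionary, and the greedy matcher below are mine alone, for illustration only - not Ambient's actual decoder.)

PRONUNCIATIONS = {                   # tiny illustrative pronunciation dictionary
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"):   "world",
    ("F", "OW", "N"):        "phone",
}

def decode(phoneme_stream):
    """Greedy longest-match decoding of a phoneme stream into words."""
    words, i = [], 0
    while i < len(phoneme_stream):
        for j in range(len(phoneme_stream), i, -1):
            candidate = tuple(phoneme_stream[i:j])
            if candidate in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[candidate])
                i = j
                break
        else:
            i += 1                   # skip an unrecognized phoneme
    return " ".join(words)

print(decode(["HH", "EH", "L", "OW", "W", "ER", "L", "D"]))  # -> "hello world"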

It's an innovation whose significance extends beyond the obvious enabling-speech-in-the-mute. Private telephone calls will be made in public, by people who look like they're listening to you. Ventriloquism through invisible wall speakers and audio beams will further challenge our confidence in human perception. Maybe our hyper-attentive descendants will be able to deliver two different speeches at once. (Most of us already know how to talk without thinking...all it would require is to also talk while thinking. It's like riding a bike.)

But for me, fettered as I am by my unilingual peasantry, one application takes the cake: linguistic software could be packed into that auxiliary computer, finally realizing something not too distant from the long-fantasized Universal Translator.
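
(Sketched loosely in Python, the pipeline I'm imagining looks something like this: decoded subvocal text passes through a translation stage before being spoken aloud at the other end. The toy word table and the speech-synthesis stand-in below are placeholders for real machine-translation and text-to-speech software - that substitution is the big assumption here.)

TRANSLATIONS_EN_ES = {"hello": "hola", "world": "mundo"}   # toy English-to-Spanish table

def translate(english_text):
    """Word-for-word lookup; swap in real machine translation here."""
    return " ".join(TRANSLATIONS_EN_ES.get(w, w) for w in english_text.split())

def universal_translator(decoded_text):
    translated = translate(decoded_text)
    return f"[synthesized voice] {translated}"   # stand-in for speech synthesis

print(universal_translator("hello world"))       # -> [synthesized voice] hola mundo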

It's not technomusical telepathy, but it's close. We're getting there. Yes, indeed: the future is singing quite a tune.

(Written for iggli.com.)