Occasionally, we simple-celled organisms get a peek into the future, and this, my friends, is it. The presentation video above, by Jeff Han at TED, on the multi-touch computer screen, speaks for itself better than I can--so watch it--but here I go anyway. Technology is always advancing (lately anyway), shocking the senses and promising to improve the function of our lives. This computer screen registers multiple simultaneous touches and their intensity. Han demonstrates it by intuitively creating puppets, exploring 3D maps, and simulating various animations. Those are just pretty showboat toys to present the new technology; they will of course have their uses for creative producers and even the military, but it was really the wallflower among the demos that grabbed my attention. With this screen, something as simple as manipulating documents on your 'desktop' becomes more like a super-sensual videogame.
This could be a revolution in typical computer interaction: it provides, literally, the hands-on ability to manipulate visual representations of data--to zoom in and out while simultaneously moving objects in opposite directions, and (if you're dexterous enough) even to create new data.
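To make that concrete, here is a minimal, hypothetical sketch (in TypeScript, with invented names like applyPinch--this is a thought experiment, not Han's actual implementation) of how two simultaneous contact points might be read: the spread between the fingers drives the zoom, and the drift of their midpoint drives the pan.

```typescript
// Hypothetical two-finger interpreter: finger spread -> zoom, midpoint drift -> pan.

interface Point { x: number; y: number; }

interface ViewState {
  scale: number;   // current zoom factor
  offset: Point;   // current pan offset
}

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function midpoint(a: Point, b: Point): Point {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// Given the previous and current positions of two fingers, update the view.
function applyPinch(
  view: ViewState,
  prev: [Point, Point],
  curr: [Point, Point]
): ViewState {
  // Fingers spreading apart zooms in; pinching together zooms out.
  const scaleChange = distance(curr[0], curr[1]) / distance(prev[0], prev[1]);
  // Both fingers drifting together pans the whole view.
  const prevMid = midpoint(prev[0], prev[1]);
  const currMid = midpoint(curr[0], curr[1]);
  return {
    scale: view.scale * scaleChange,
    offset: {
      x: view.offset.x + (currMid.x - prevMid.x),
      y: view.offset.y + (currMid.y - prevMid.y),
    },
  };
}
```

The point is that the same two fingers carry both gestures at once, which is exactly what a single mouse pointer can never do.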
Hmm, imagine what that could do in a search engine. Already there are start-ups, seeking to nudge the search giants Goog and Y!, that find their added value by tossing out the one-text-box-answers-all paradigm. But most of these upstart innovations work within the fixed web frame we're so used to: type, click, specify, zero in, backstep, type again, specify, hunt, find.
Imagine if searching for keyword-associated results were more like this: instead of scanning a single stack of papers in a linear hierarchy of relevance, it could be like diving into a swimming pool where there are several piles, one pile for each keyword, ranked from the front. The stacks are arranged to radiate out in front of you in a semicircle. All the pages are floating, so that every posterior page in a pile is also visible, peeking out from behind. At a glance, you could reach forward with a few key gestures, zooming in to your desired result, just as you would reach forward to swim to your destination in a pool. And you can imagine the same for image search, or even sound.
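For a rough sense of how that layout might work under the hood, here is another hypothetical TypeScript sketch (the names and numbers are mine, not any real search engine's API): it fans one pile of results out per keyword around a semicircle and nudges each posterior page back a little so it peeks out from behind the one in front.

```typescript
// Hypothetical layout: one pile per keyword, arranged across a semicircle,
// with each page behind the first offset so it stays partly visible.

interface ResultPage { title: string; relevance: number; }

interface PlacedPage extends ResultPage {
  x: number;
  y: number;
  depth: number;  // 0 = front of the pile
}

function layoutPiles(
  piles: Map<string, ResultPage[]>,   // keyword -> results ranked by relevance
  radius: number,                     // how far the piles sit from the viewer
  peek: number                        // how far each posterior page is nudged back
): Map<string, PlacedPage[]> {
  const placed = new Map<string, PlacedPage[]>();
  const keywords = [...piles.keys()];
  keywords.forEach((keyword, i) => {
    // Spread the piles evenly across a semicircle (0 to 180 degrees).
    const angle = Math.PI * (i + 1) / (keywords.length + 1);
    const baseX = radius * Math.cos(angle);
    const baseY = radius * Math.sin(angle);
    const pages = piles.get(keyword)!.map((page, depth) => ({
      ...page,
      depth,
      // Each deeper page sits a bit further back along the same direction.
      x: baseX + depth * peek * Math.cos(angle),
      y: baseY + depth * peek * Math.sin(angle),
    }));
    placed.set(keyword, pages);
  });
  return placed;
}
```

Pair that with the pinch-and-pan gestures above and "swimming" toward a result stops being a metaphor.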
Going back to the art side of it, this could fundamentally bridge the divide between traditional artists and digital creators. In the recent wave of technological advance, isn't it true that most artists have been left behind? If you don't learn the new tools that channel your talent directly into the digital interface, you're basically obsolete. Sculpture and paintings are fine for a gallery, but they lack viability next to the digital vector landscape of Illustrator, or even CAD. Controlling digital creativity without resigning oneself to Adobe's tortuous layers--what a concept. It would be wonderful to build webpages in software couched in this pliable screen. Or construct an architectural model. Or build a virtual landscape, like Second Life. Or, you know, make a movie.
The Core Duo chip in the Mac Pro is exciting and all, but this requires the exclamation of foreign words to really capture the exotic thrill of the innovation. Best to Han and his lab--it will be years before this technology is ready for the market (it was only a research demo, a proof of concept), but (( WOW )).