Visually, the researchers have demonstrated the ultrasound patterns by directing the device at a thin layer of oil so that the depressions in the surface can be seen as spots when lit by a lamp.
They were probably referring to the oil depressions being visible, but the title is certainly misleading.
While this certainly looks better (cleaner + higher resolution and range) than MIT's inFORM (http://tangible.media.mit.edu/project/inform/), it's still bound to a surface, and effectively just a 2D monochrome image presented with pressure instead of light.
What I want to see is a haptic method that can represent complex topologies, and at the moment, it looks like that's still gloves (although this will certainly be useful for quick/casual interactions, as well as its applications in the medical field as michaeljansen notes).
Agreed, amazing new interfaces could be made if we could touch a complex 3D topology. Although the demonstration fell short of those expectations, I still think that sound is the way to go in the future.
I could be wrong about this intuition, but I think this is mainly because of the simple fact that sound is just disturbances in a medium that we are walking through constantly and can feel to some degree (i.e. air).
Almost any announcement associated with Bristol Interaction and Graphics (BIG) should be taken with a pinch of salt. The group follows a similar approach to MIT's Media Lab: lots of conference publications and heavy media exposure.
Does anyone remember the demos of Pranav Mistry's SixthSense device in 2011?
With this kind of technology built into Playstation 7 you'll be able to play FIFA 2025 in a big living room with all your friends, with an entire mini football pitch projected in the middle of everyone (in 4D?).
It does however say "Visually, the researchers have demonstrated the ultrasound patterns by directing the device at a thin layer of oil so that the depressions in the surface can be seen as spots when lit by a lamp."
Although I assume that's not the normal mode of operation.
This solves a really big problem with gestural interaction: the lack of feedback other than visual cues. Right now the only way to get this is through gloves, rings, etc., which are impractical at best and impossible for many of the early use cases (for example: surgeons who need to keep their hands sterile).