Voice recognition is only ever going to get better (check out CMU Sphinx if you want to play with speech rec. for yourself), but at the moment… um, it’s not quite up to scratch:
Okay, so he’s putting it on a bit for the lulz, and if he’d just said “where’s the nearest pub?” he’d probably have got a useful answer – but that’s half the battle with speech recognition. At the moment we have to adapt our speech to the software, but in the future I don’t think we’ll have to at all.
At least for the time being, as Penny Arcade put it, it’s all going to be a bit like this:
The Kinect has really fired people’s imaginations and there’s some great work happening right now – I can’t wait until I become a part of it =D
1.) The OpenNI library working with the Kinect to perform skeletal mapping:
2.) Using gestures and voice commands to navigate medical imagery:
3.) Apparently Microsoft are working on modifying the Kinect to quadruple the resolution of its 3D sensing (a structured-light camera) from 320×240 to 640×480, at which point it will be able to detect fingers and other small features: ms-quadrupling-kinect-accuracy [eurogamer].
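To get a feel for why resolution matters so much here: a structured-light camera like the Kinect projects an IR dot pattern and triangulates depth from how far each dot shifts (its disparity) as seen by the sensor. Finer pixel grids mean finer disparity measurements, hence finer depth steps. Here’s a toy sketch of that triangulation – the focal length and baseline below are made-up round numbers, not the Kinect’s real calibration:

```python
# Toy illustration of structured-light depth triangulation.
# focal_px and baseline_m are invented placeholder values,
# NOT the Kinect's actual calibration parameters.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulate depth (metres) from the pixel shift of a projected IR dot."""
    return (focal_px * baseline_m) / disparity_px

# Larger disparity = closer object; finer disparity steps = finer depth steps.
for d in (10.0, 20.0, 40.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.3f} m")
```

Doubling the sensor resolution along each axis (320×240 → 640×480) doubles the precision of each disparity measurement, which is why small features like individual fingers suddenly become resolvable.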