Disruptive user-interface technology available soon
Posted: Mon Dec 03, 2012 8:59 pm
by DeltaV
https://leapmotion.com/
Not Kinect. For $70 you get 0.01mm resolution within an 8 ft^3 space. Uses infrared.
Check out the video on the home page, especially the point clouds of the hands.
This will be big.
Re: Disruptive user-interface technology available soon
Posted: Mon Dec 03, 2012 10:39 pm
by Diogenes
DeltaV wrote:https://leapmotion.com/
Not Kinect. For $70 you get 0.01mm resolution within an 8 ft^3 space. Uses infrared.
Check out the video on the home page, especially the point clouds of the hands.
This will be big.
Very cool.
Posted: Mon Dec 03, 2012 11:24 pm
by zapkitty
Dammitall to hell...
... how are we going to get to the point of mech pilots wearing skintight data films when they can just use this gadget instead?!...
... Sorry, Shirow-san, the present has overrun the future again... *sob*...
Posted: Tue Dec 04, 2012 3:40 pm
by krenshala
zapkitty wrote:Dammitall to hell...
... how are we going to get to the point of mech pilots wearing skintight data films when they can just use this gadget instead?!...
... Sorry, Shirow-san, the present has overrun the future again... *sob*...
The skinsuit is a serious contender for an actual pressure suit, so you may get your wish.

When disruptive technologies combine...
Posted: Tue Dec 04, 2012 5:04 pm
by SheltonJ
If you combine this ability to 3D scan to 0.01mm in an 8 cubic foot space with appropriate software and 3D printing, some amazing things become possible. The time to create a high-resolution 3D model of a physical object would drop significantly. Combined with 3D printing, that would allow cheap replication of spare parts, given an undamaged original. Damaged parts would of course require cleanup to 'remove' the damage from the scan.
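A rough sketch of that scan-to-print pipeline, assuming the scanner can export a raw point cloud (Leap hasn't said it will) and using the open-source Open3D library; the filenames and parameter values here are just placeholders:

Code:
import open3d as o3d

# Hypothetical pipeline: raw scan -> surface mesh -> STL for the printer.
pcd = o3d.io.read_point_cloud("scan.ply")   # point cloud exported by the scanner
pcd.estimate_normals(                       # Poisson reconstruction needs normals
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_vertex_normals()               # STL writers want normals
o3d.io.write_triangle_mesh("part.stl", mesh)  # hand this file to the slicer

The damage cleanup you mention would happen on the mesh, between reconstruction and export.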
Another very interesting area would be to use this interface technique to enhance 3D modeling software to allow a more direct manipulation style or even 'air sculpting'.
Wow, just wow.
Posted: Tue Dec 04, 2012 6:02 pm
by DeltaV
According to various reports, 'air sculpting' was their original motivation to develop this.
Posted: Wed Dec 05, 2012 4:55 pm
by Skipjack
I saw it a while ago. Very interesting technology!
Posted: Wed Dec 05, 2012 6:18 pm
by DeltaV
http://www.technologyreview.com/news/50 ... ntrol-era/
Leap’s founders won’t share exact details of their technology, but Holz says that unlike the Kinect, the Leap doesn’t project a grid of infrared points onto the world that are tracked to figure out what is moving and where (see the pattern produced by the Kinect sensor).
Despite having two cameras, the Leap does not use stereovision techniques to determine depth, says Holz. Instead, the second camera is to provide an extra source of information and prevent errors due to parts of a person’s hand obscuring itself or the other hand.
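They're keeping the actual algorithm secret, so here's only a toy illustration of why a second viewpoint helps with self-occlusion. It assumes both cameras' points are already registered into one world frame (the hard part, which this skips entirely):

Code:
import numpy as np

def merge_views(points_a, points_b, voxel=0.01):
    """Union of two cameras' 3-D points (N x 3 arrays, same world frame, mm).

    A patch hidden from camera A (a finger occluding itself, or the other
    hand) may still be visible to camera B, so merging the two views fills
    each camera's blind spots. Snapping to a 0.01 mm grid drops duplicate
    points seen by both cameras.
    """
    both = np.vstack([points_a, points_b])
    keys = np.round(both / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return both[np.sort(idx)]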
Posted: Wed Dec 05, 2012 7:00 pm
by Maui
Cool, but I'm not sure I see this catching on in the long run for desktops (I guess the counter-argument is that desktops are dying, eh).
I honestly think I'd be faster and more accurate with the mouse, plus I gotta think mouse buttons are always going to be more precise and reliable than gesture recognition.
Plus, wouldn't you get tired holding your arms out in front of yourself the whole day? Hey, I know I could use more exercise, but...
I thought I heard years ago that someone was working on a pointer that tracked the direction of your gaze. Combine that with this for auxiliary functions and maybe I'm interested.
Posted: Fri Dec 07, 2012 5:51 am
by DeltaV
Maui wrote:Plus, wouldn't you get tired holding your arms out in front of yourself the whole day? Hey, I know I could use more exercise, but...
I thought I heard years ago that someone was working on a pointer that tracked the direction of your gaze. Combine that with this for auxiliary functions and maybe I'm interested.
As I read it, you can rest your hand on the desk and just move one or two fingers, if need be. The 'gains' (motion scale factors) in the software can be tuned to fit your particular style of use.
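Presumably something as simple as this; the names and values are invented, but the idea is just a tunable scale factor between fingertip travel and cursor travel:

Code:
# Hypothetical gain mapping: millimetres of fingertip travel -> screen pixels.
GAIN_X, GAIN_Y = 8.0, 8.0   # higher gain = less hand motion needed

def to_cursor_delta(dx_mm, dy_mm):
    return round(dx_mm * GAIN_X), round(dy_mm * GAIN_Y)

# A resting finger nudged 2 mm to the right moves the cursor 16 px right.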
The gaze sensor might be problematic, since human eyes move in saccades which the conscious mind is usually not aware of. Not saying a saccade filter could not be developed. Something similar might be needed with Leap for people with Parkinson's, etc.
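One classic way such a filter is built (the velocity-threshold, or I-VT, scheme from the eye-tracking literature; the threshold here is just a placeholder) is to ignore any sample moving faster than a fixation plausibly can:

Code:
import numpy as np

def fixation_mask(gaze_deg, dt, vmax_deg_s=100.0):
    """Velocity-threshold (I-VT) saccade filter.

    gaze_deg: (N, 2) gaze angles in degrees, sampled every dt seconds.
    Samples moving faster than vmax_deg_s are treated as saccades; the
    pointer only follows the remaining fixation samples.
    """
    v = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt  # deg/s
    return np.concatenate([[True], v < vmax_deg_s])

A tremor filter for the Parkinson's case would be similar in spirit, only low-pass in frequency rather than thresholded in velocity.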
Posted: Fri Dec 07, 2012 5:57 am
by DeltaV
I'd like to know how the point cloud points beyond line-of-sight (such as the backs of the fingers) are obtained. Infrared diffraction? But that is part of the secret sauce...
Posted: Fri Dec 07, 2012 6:26 am
by Maui
That's interesting about the saccades. I learned something today, I like that.
I guess I'll have to wait for the mind reading tech then...
Posted: Fri Dec 07, 2012 11:27 pm
by paperburn1
DeltaV wrote:I'd like to know how the point cloud points beyond line-of-sight (such as the backs of the fingers) are obtained. Infrared diffraction? But that is part of the secret sauce...
I suspect
http://www.technovelgy.com/ct/Science-F ... wsNum=3823
Posted: Sat Dec 08, 2012 1:15 am
by Blankbeard
DeltaV wrote:The gaze sensor might be problematic, since human eyes move in saccades which the conscious mind is usually not aware of. Not saying a saccade filter could not be developed. Something similar might be needed with Leap for people with Parkinson's, etc.
OpenCV does gaze tracking:
http://hackaday.com/2012/05/30/opencv-k ... -tracking/
I wonder if that's what they use. Massively useful software: body-part tracking, face ID, locating, counting and tracking people. And a lot of it requires no more than smartphone-style hardware.
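For the curious: the usual first stage of those DIY gaze trackers is OpenCV's stock Haar-cascade face and eye detection. Whether that particular hackaday project goes further than this I don't know, but the basic building block looks like:

Code:
import cv2

# Stock Haar-cascade face + eye detection, running on an ordinary webcam.
# Needs the opencv-python package, which ships the cascade XML files.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]           # search for eyes inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()

Actual gaze estimation then means locating the pupil inside each eye box and mapping its offset to screen coordinates, which is where the real work is.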