Now using two webcams (the other is above, out of view) to get 3D info out of my gestures. Grasshopper detects which object in the Rhino scene I am closest to, and by pausing on that object, I can have it automatically selected in Rhino itself (moving out of camera view deselects everything). The goal is to create gestural workability within the pre-existing Rhino commands.
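The core logic here (nearest-object detection plus dwell-to-select, with deselection when the hand leaves view) can be sketched in plain Python. This is a minimal, hypothetical sketch, not the actual Grasshopper/Firefly definition: the object names, positions, and dwell time are all made up for illustration.

```python
import math

DWELL_SECONDS = 1.0  # hypothetical dwell time before an object is selected

# Hypothetical scene: object name -> 3D position
SCENE = {"box": (0.0, 0.0, 0.0), "sphere": (5.0, 0.0, 0.0)}


def closest_object(hand, scene):
    """Return the name of the scene object nearest the 3D hand position."""
    return min(scene, key=lambda name: math.dist(hand, scene[name]))


class DwellSelector:
    """Select an object once the hand has hovered near it long enough."""

    def __init__(self, scene, dwell=DWELL_SECONDS):
        self.scene = scene
        self.dwell = dwell
        self.candidate = None   # object currently being hovered over
        self.since = None       # timestamp when hovering started
        self.selected = None    # committed selection, if any

    def update(self, hand, now):
        """Feed a 3D hand position (or None if out of view) and a timestamp."""
        if hand is None:
            # Hand left the camera view: deselect everything
            self.candidate = self.since = self.selected = None
            return None
        name = closest_object(hand, self.scene)
        if name != self.candidate:
            # Moved near a different object: restart the dwell timer
            self.candidate, self.since = name, now
        elif now - self.since >= self.dwell:
            # Dwell satisfied: commit the selection
            self.selected = name
        return self.selected
```

In the real setup this per-frame update would run inside Grasshopper, with the hand position triangulated from the two webcams and the committed selection pushed to Rhino's selection state.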
Major credit goes to Andy Payne for his Firefly components, and to Andrew Heumann and Chris Tietjen for scripting help.
In other news, the LEAP Motion controller’s release date is now May 13… exciting, but I was totally ready to get it in January. Ah well, I guess the best thing I can do before then is really put some work into the Rhino side of things.