Gestural HCI

Advisor: Susana Covarrubias
Duration: 2012 – 2013

During an internship with Gensler Seattle, I conducted a research project on the role of gestural modeling within the design environment. Using a webcam and Grasshopper, along with Andy Payne’s Firefly plugin, I constructed a setup that allowed basic, real-time gestural input to affect the Rhino modeling environment. The setup used the feed of a single webcam and required converting that imagery into an understanding of 3D space. While no depth-sensing hardware was used, simple pieces of colored paper or tape served as trackable markers, allowing the image to be quickly transformed into digital geometry.
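The marker-tracking idea can be sketched roughly as follows — a minimal, hypothetical Python version (the actual pipeline ran inside Grasshopper/Firefly) that thresholds a frame for a saturated marker color and reduces the matching pixels to a single centroid, which can then drive geometry:

```python
def track_marker(frame, is_marker):
    """Return the (x, y) centroid of pixels matching the marker color.

    frame: 2D list of (r, g, b) tuples; is_marker: predicate on a pixel.
    Returns None if no marker pixels are found.
    """
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if is_marker(px):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A crude "red tape" test: red strongly dominates green and blue.
def is_red(px):
    r, g, b = px
    return r > 150 and r > 2 * g and r > 2 * b

# Tiny synthetic 4x4 frame with a red patch in the top-right corner.
black = (0, 0, 0)
red = (255, 30, 20)
frame = [
    [black, black, red,   red],
    [black, black, red,   red],
    [black, black, black, black],
    [black, black, black, black],
]
print(track_marker(frame, is_red))  # -> (2.5, 0.5), centroid of the patch
```

With two calibrated cameras, a centroid from each view can be triangulated into a 3D point; with a single camera, only the 2D position (plus cues like apparent marker size) is available.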

Later, I extended this setup, incorporating multiple webcams to achieve basic drawing and modeling tasks, as well as delay-based gestural selection.
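Delay-based (dwell) selection fires a "click" when a tracked point hovers in place for long enough — useful when the hands never touch a mouse. A minimal sketch, with hypothetical thresholds standing in for whatever the original Grasshopper definition used:

```python
class DwellSelector:
    """Trigger a selection when a tracked point stays within a small
    radius for a fixed dwell time."""

    def __init__(self, dwell_seconds=1.0, radius=10.0):
        self.dwell_seconds = dwell_seconds
        self.radius = radius
        self.anchor = None       # position where the current dwell began
        self.anchor_time = None  # time when the current dwell began

    def update(self, pos, t):
        """Feed a tracked (x, y) position at time t (seconds).
        Returns the dwell position when a selection fires, else None."""
        if self.anchor is None or self._dist(pos, self.anchor) > self.radius:
            self.anchor, self.anchor_time = pos, t  # moved too far: restart
            return None
        if t - self.anchor_time >= self.dwell_seconds:
            fired = self.anchor
            self.anchor, self.anchor_time = pos, t  # reset after firing
            return fired
        return None

    @staticmethod
    def _dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

sel = DwellSelector(dwell_seconds=1.0, radius=10.0)
print(sel.update((100, 100), 0.0))  # None -- dwell just started
print(sel.update((103, 101), 0.5))  # None -- still hovering, too soon
print(sel.update((102, 99), 1.1))   # (100, 100) -- held long enough
```

The radius tolerance matters: raw gestural input jitters, so requiring the point to stay *near* its anchor, rather than exactly still, is what makes dwell selection usable.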

Shortly after this work, Leap Motion (now UltraLeap) successfully crowdfunded and launched their low-cost, high-fidelity, depth-sensing controller. Again using Andy Payne’s Firefly plugin, I developed a setup that enabled gesture-based navigation of 3D environments. This setup incorporated intuitive controls for acceleration and deceleration, as well as the basic object selection developed earlier.
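One way such acceleration and deceleration controls can work — a hedged sketch, not the exact Firefly definition — is to map the hand's offset from a rest position to a camera velocity, with a dead zone so a resting hand doesn't drift the view, and exponential smoothing so starts and stops feel like ramps rather than jumps. All parameter values below are illustrative assumptions:

```python
def nav_velocity(offset, dead_zone=0.05, gain=2.0, max_speed=1.0):
    """Map a normalized hand offset (-1..1) from rest to a camera velocity.
    Offsets inside the dead zone are ignored; beyond it, a squared
    response gives fine control near rest and faster travel at full reach."""
    mag = abs(offset)
    if mag <= dead_zone:
        return 0.0
    # Rescale so speed ramps from zero at the dead-zone edge.
    scaled = (mag - dead_zone) / (1.0 - dead_zone)
    speed = min(gain * scaled ** 2, max_speed)
    return speed if offset > 0 else -speed

class SmoothedAxis:
    """Exponentially smooth the target velocity so the camera accelerates
    and decelerates gradually instead of snapping to each new reading."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing
        self.velocity = 0.0

    def step(self, offset):
        target = nav_velocity(offset)
        self.velocity += self.smoothing * (target - self.velocity)
        return self.velocity

axis = SmoothedAxis(smoothing=0.2)
for _ in range(3):
    v = axis.step(0.8)  # hand held well forward: velocity ramps upward
print(round(v, 3))  # -> 0.488
```

One axis like this per degree of freedom (forward, strafe, orbit) is enough for basic gesture-driven navigation of a 3D scene.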

Ultimately, the basic computer vision strategy underlying this project – processing imagery using custom-built scripts – formed the basis for my master’s thesis at MIT.
