GestureWiz

Simplifying rapid prototyping for cross-modal, cross-device 2D/3D gestures

Role

As a Research Assistant at the Michigan Information Interaction Lab, University of Michigan - Ann Arbor, I prototyped early versions of the worker interface for crowdsourcing tasks and collected gesture sets for touch, mouse, and AR environments.

Skills

  • Prototyping
  • HCI Research
  • Multi-Modal Design
  • Augmented Reality
  • Gesture Recognition
  • Programming
  • Emerging Technology

This research was published by Prof. Michael Nebeling and researcher Maximilian Speicher at CHI 2018 in Montreal, Canada. [Paper]

GestureWiz Intro

How do we prototype for 3D gestures?

Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.'s $1 for rapid user interface prototypes. They do so because its 16 gestures are well-established, well-studied, and therefore provide a good baseline. Most existing recognizers, however, aren't sufficient because they:

  • Are limited to a particular input modality
  • Come with a fixed, pre-trained set of gestures
  • Cannot be easily combined with other recognizers
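For context, the $1 recognizer matches a candidate stroke against stored templates by resampling, rotating, and scaling both into a common frame, then comparing average point-to-point distance. A minimal sketch of that idea (simplified: it omits $1's golden-section rotation search, and all names are illustrative):

```python
import math

N = 64  # resample count used in the original $1 paper


def path_length(pts):
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))


def resample(pts, n=N):
    """Resample a stroke to n roughly equidistant points."""
    pts = list(pts)
    interval = path_length(pts) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the start of the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # rounding can leave the list one point short
        out.append(pts[-1])
    return out[:n]


def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))


def rotate_to_zero(pts):
    """Rotate so the angle from the centroid to the first point is zero."""
    cx, cy = centroid(pts)
    theta = -math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(theta), math.sin(theta)
    return [((x - cx) * c - (y - cy) * s, (x - cx) * s + (y - cy) * c)
            for x, y in pts]


def scale_and_translate(pts, size=250.0):
    """Scale to a reference square and center the centroid on the origin."""
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w = max(max(xs) - min(xs), 1e-6)  # guard against degenerate strokes
    h = max(max(ys) - min(ys), 1e-6)
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]


def normalize(pts):
    return scale_and_translate(rotate_to_zero(resample(pts)))


def recognize(candidate, templates):
    """Return the template name with the smallest average pointwise distance."""
    cand = normalize(candidate)
    return min(templates,
               key=lambda name: sum(math.dist(p, q)
                                    for p, q in zip(cand, templates[name])) / N)
```

Because every stroke is normalized for position, scale, and (approximately) rotation, the recognizer stays template-based and trainable by example, which is what makes it attractive for prototyping but also ties it to one input modality at a time.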

Simply put, the process of creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skills.

GestureWiz is a rapid prototyping environment that lets designers with minimal programming knowledge build gesture-based interfaces via a record–recognize–run pattern.

What does this mean?

  • Record 2D/3D gestures using a video-based record–replay tool to form mouse, multi-touch, multi-device, and full-body gesture sets
  • Use a Wizard of Oz approach, optionally powered by crowds, to recognize gestures from a given set
  • Run the resulting human-powered recognizer in user interface prototypes
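The record–recognize–run pattern can be sketched as three pluggable pieces, where the "recognizer" is simply a human oracle (a local wizard or a crowd worker) rather than a trained model. All class and function names below are illustrative, not the actual GestureWiz API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class GestureSet:
    """Record phase: a named set of gesture recordings (e.g., video clips)."""
    recordings: Dict[str, object] = field(default_factory=dict)

    def record(self, name, clip):
        self.recordings[name] = clip


@dataclass
class WizardRecognizer:
    """Recognize phase: classification is deferred to a human oracle who
    compares the live candidate against the recorded gesture set."""
    gestures: GestureSet
    oracle: Callable[[object, Dict[str, object]], str]

    def recognize(self, candidate):
        return self.oracle(candidate, self.gestures.recordings)


def run(recognizer, bindings, candidate):
    """Run phase: route the recognized gesture label to an application command."""
    return bindings[recognizer.recognize(candidate)]()
```

Swapping the human oracle for a machine recognizer later requires no change to the application, which is the point of prototyping the interaction before committing to an implementation.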
Worker Interface for gesture identification and gesture recognition tasks

Working

How GestureWiz works:
  • Rapidly record multi-modal gestures (e.g., multi-touch, Kinect, Leap Motion, and mid-air 3D gestures) using the Requester UI in record mode.
  • Test for conflicts using the built-in conflict checker to resolve ambiguities, and adjust gesture designs using the Worker UI, a human-powered gesture recognizer that requires no recognizer implementation and instead relies on a Wizard of Oz (WOZ) or crowdsourcing approach.
  • Map application commands to any of the defined gestures using the GestureWiz library.
  • Test the gestures and perform application tasks, powered by human recognizers, in recognition mode.
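One way to picture the conflict checker: flag any pair of gestures in a set that are similar enough to confuse a recognizer, human or machine. A minimal sketch, assuming gestures are stored as normalized, equal-length point traces (a hypothetical representation; the names and threshold are illustrative):

```python
import math
from itertools import combinations


def avg_distance(a, b):
    """Mean pointwise distance between two equal-length, normalized traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)


def find_conflicts(gesture_set, threshold=30.0):
    """Return pairs of gesture names whose traces fall below the
    similarity threshold and therefore risk being confused."""
    return [(a, b)
            for (a, ta), (b, tb) in combinations(gesture_set.items(), 2)
            if avg_distance(ta, tb) < threshold]
```

Flagged pairs would then be sent back to the record step so the designer can redesign one of the conflicting gestures before wiring them to application commands.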