This post has been republished via RSS; it originally appeared at: Channel 9.
It's been a long time since we've highlighted the Visual Gesture Builder (VGB), in posts such as Custom Gestures, Kinect for Windows v2 and the Visual Gesture Builder, and Saluting the Visual Gesture Builder - Details and example code.
It's a great tool and one that can really help streamline your Kinect development, so it's time to look at it again...
Visual Gesture Builder (VGB) generates data that applications use to perform gesture detection at run time. Even for simple cases, gesture detection is a challenging task that may require many lines of code to obtain reliable results, considering all of the different users and spaces that an application might encounter. By using a data-driven model, VGB shifts the emphasis from writing code to building gesture detection that is testable, repeatable, configurable, and database-driven. This method provides better gesture recognition and reduces development time.
VGB uses a number of detection technologies. The user selects the detection technology to use (AdaBoostTrigger for discrete gesture detection, or RFRProgress for continuous gesture progress) and tags frames in a clip related to a meaningful gesture, such as a punch or a kick. At the end of the tagging process, VGB builds a gesture database; with this database, an application can process body input from a user to, for example, detect a hit or swing progress.
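At run time, an application loads the gesture database that VGB produces and feeds it tracked-body data from the sensor. Below is a minimal C# sketch of that flow, assuming the Kinect for Windows v2 SDK's Microsoft.Kinect.VisualGestureBuilder API; the database path (`Gestures\Punch.gbd`) is a placeholder for whatever .gbd file your VGB project builds, and details such as wiring the tracking ID from body frames are elided.

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

class GestureRunner
{
    // Placeholder path: point this at the .gbd file built by your VGB project.
    const string DatabasePath = @"Gestures\Punch.gbd";

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // The tracking ID would normally be assigned from a tracked body
        // delivered by a BodyFrameReader; 0 is used here as a placeholder.
        var source = new VisualGestureBuilderFrameSource(sensor, 0);

        // Load every gesture defined in the VGB database into the frame source.
        using (var database = new VisualGestureBuilderDatabase(DatabasePath))
        {
            foreach (Gesture gesture in database.AvailableGestures)
            {
                source.AddGesture(gesture);
            }
        }

        VisualGestureBuilderFrameReader reader = source.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (VisualGestureBuilderFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;

                // AdaBoostTrigger gestures surface as discrete detected/confidence results...
                if (frame.DiscreteGestureResults != null)
                {
                    foreach (var pair in frame.DiscreteGestureResults)
                    {
                        if (pair.Value != null && pair.Value.Detected)
                            Console.WriteLine($"{pair.Key.Name}: confidence {pair.Value.Confidence:F2}");
                    }
                }

                // ...and RFRProgress gestures as continuous progress values.
                if (frame.ContinuousGestureResults != null)
                {
                    foreach (var pair in frame.ContinuousGestureResults)
                    {
                        if (pair.Value != null)
                            Console.WriteLine($"{pair.Key.Name}: progress {pair.Value.Progress:F2}");
                    }
                }
            }
        };
    }
}
```

Because the gestures live in the database rather than in code, swapping in a retrained or entirely different gesture set is just a matter of pointing at a different .gbd file.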
The following are some of the results of using VGB:
- Mitigating the current code-intensive approach.
- Reusing gesture definitions in different versions of the same application.
- Sharing gesture definitions among applications.
- Creating a library of gesture definitions.
- Improving the predictability and reliability of gesture definitions.
- Providing a testing framework for gesture detection.
- Analyzing and visualizing gesture-detection results.
- Managing a large quantity of data in a user-friendly IDE.
Using Visual Gesture Builder
Project Information URL: https://msdn.microsoft.com/en-us/library/dn785529.aspx