This project is read-only.

Reduction of "outlier" marker pixel

Jul 9, 2009 at 3:30 PM


Thank you very much for this nice project! I used this SDK a lot last semester at my university to develop a
gesture recognition system that works for XNA games as well as for Windows applications. However, I encountered
some drawbacks while using Touchless.

In some lighting conditions, pixels within the color range of a marker appear in the background, or people
wearing the same color as a marker walk into the camera's field of view and thereby
disturb the correctness of the marker position. So I would like to try to improve this somehow...

My idea is to detect sudden changes in a marker's size or center based on the marker's previous data.
This should happen while the marker is recognized by the UpdateMarker() function...

So, with knowledge of a marker's last position and area, the UpdateMarker() function should be able to
reject pixels as marker pixels if they lie outside of the last marker size plus a maximum deviation.
Basically it should be outlier detection :-)
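To make the idea concrete, here is a minimal Python sketch of that rejection test. The function name, the bounding-box tuple, and the max_deviation parameter are my own illustration choices, not Touchless SDK identifiers; in the real UpdateMarker() this check would run per candidate pixel before the pixel is counted for the marker.

```python
# Hypothetical sketch of the proposed outlier rejection.
# last_bounds is the marker's bounding box from the previous frame:
# (left, top, right, bottom); max_deviation is the allowed growth in pixels.
def is_outlier(x, y, last_bounds, max_deviation):
    """True if pixel (x, y) lies outside the previous marker box
    grown by max_deviation pixels on every side."""
    left, top, right, bottom = last_bounds
    inside = (left - max_deviation <= x <= right + max_deviation and
              top - max_deviation <= y <= bottom + max_deviation)
    return not inside

# A pixel inside the grown box is kept; one far away is rejected.
print(is_outlier(55, 42, (50, 40, 60, 50), 5))   # -> False (kept)
print(is_outlier(200, 10, (50, 40, 60, 50), 5))  # -> True (rejected)
```

This only filters by position; a second check on the total detected area versus the previous frame's area would catch the "sudden size change" case as well.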

My problem is that I'm not 100% sure where to start, and I'm also not 100% sure I understood all the code
in the UpdateMarker() function.

It would be great if someone could help me find a starting point or explain the function a bit more precisely...
I'm not that good with HSV colors...

Looking forward to some answers :-)


Jul 9, 2009 at 6:38 PM

Hi Jonas,

Your gesture recognition sounds really interesting and I would love to have a glance at your code or project. Can you share some more information, or even your project, with the community?

Now to the UpdateMarker() function:

(all line numbers valid for SVN rev. 33989)

The if-block starting at line 399 is entered when the currently checked pixel at position (variables x, y) is "detected" for the current marker (variable marker).

The marker properties set in this block represent the following:

  • Area: total number of pixels detected for this marker
  • X: center X position (the average is calculated after the image loop, for performance reasons, by dividing by Area)
  • Y: center Y position (the average is calculated after the image loop, for performance reasons, by dividing by Area)
  • Top, Bottom, Left, Right: the marker outline rectangle
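The accumulation described above can be sketched roughly like this in Python (this is an illustration of the scheme, not the actual Touchless SDK code; X and Y hold running sums inside the loop and only become the center after the division by Area):

```python
# Sketch of the per-pixel accumulation described above: sums and the
# outline rectangle are updated inside the loop, and the center is
# computed once afterwards by dividing the sums by Area.
def accumulate(pixels):
    area = 0
    sum_x = sum_y = 0
    left = top = float("inf")
    right = bottom = float("-inf")
    for x, y in pixels:
        area += 1            # Area: total number of detected pixels
        sum_x += x           # X accumulates a sum during the loop...
        sum_y += y           # ...and so does Y
        left = min(left, x)  # Top/Bottom/Left/Right: outline rectangle
        right = max(right, x)
        top = min(top, y)
        bottom = max(bottom, y)
    # Division happens only once, after the image loop.
    return dict(area=area, cx=sum_x / area, cy=sum_y / area,
                left=left, top=top, right=right, bottom=bottom)

print(accumulate([(1, 1), (3, 1), (2, 4)]))
# -> area 3, center (2.0, 2.0), rectangle left=1, top=1, right=3, bottom=4
```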

The code starting at line 425 draws the search box for every marker. The search box is the area in which a marker may be detected in this processing pass. When a pixel has marker colors but lies outside of this box, it will not be detected as a marker pixel.

Maybe this is the point where you can attach your modifications: create a better search box for a marker by changing the preProcessMarker() method and/or adapting the UpdateMarker() code to use more intelligent search boxes. Maybe you can use non-rectangular search boxes, or smaller ones...
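One simple variant of a "more intelligent" search box, sketched in Python: shrink it to the marker's rectangle from the previous frame plus a margin, clamped to the image. The function name, the margin parameter, and the tuple layout are my own assumptions for illustration; the real code works on the marker's Left/Top/Right/Bottom properties.

```python
# Hypothetical sketch: derive the next frame's search box from the
# previous marker rectangle, grown by a margin and clamped to the image.
def next_search_box(prev_rect, margin, image_w, image_h):
    left, top, right, bottom = prev_rect
    return (max(0, left - margin),
            max(0, top - margin),
            min(image_w - 1, right + margin),
            min(image_h - 1, bottom + margin))

# A 10-pixel margin around a small marker in a 320x240 image:
print(next_search_box((50, 40, 60, 50), 10, 320, 240))  # -> (40, 30, 70, 60)
```

The margin trades robustness for tightness: too small and a fast-moving marker escapes the box, too large and background pixels of the same color slip back in.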


You will have noticed in my post that there already is code to reduce the search area. You can make it visible by setting the marker's Highlight property to "true". The search box is then shown in the cam image in the marker's representative color. Keep in mind that this box is NOT the marker area rectangle; this can be a bit confusing...


I just added a patch to the Source Code section of this project that modifies the lib and demo app so the whole marker rectangle can be highlighted, not only the detected pixels. (patch #3289)

Jul 21, 2009 at 6:26 AM

Hey Jonas,

eFloh's description of the detection process is really good. Is there anything else we can clear up?

- Mike

Jul 21, 2009 at 12:49 PM

Hey eFloh and Mike...

No worries, your description is very good and I have already started to try out some things. Anyway, I don't have enough time right now because I'm writing the theoretical part of my master's thesis. When I'm finished, I will publish the project along with some more details about it. There are a lot of enhancements I could imagine for my gesture recognition system, so this might be a good place to start from.

Some basics of what the system is already able to do:

  • semi-automatic setup of markers at the beginning
  • two "click" modes per marker: one for drawing gestures and one for interacting with forms (a usual mouse-click mode)
  • recognition of 3 gesture types: pointing gestures, one-marker gestures, and multi-marker gestures
  • assigning program functions to specific gestures (or assigning keyboard shortcuts to gestures)
  • loading and saving of gestures via XML file

This should be enough for a first run. I have to finish my theoretical part by the 17th of August, so I will post more after that date :-)