
UML Modelling with Touchless SDK

Feb 18, 2010 at 12:19 AM
Edited Feb 18, 2010 at 4:11 AM

Greetings to you all,


I am a student currently studying at Bournemouth University. I have decided that for my project I will attempt to use the Touchless SDK to create a Silverlight-based UML modelling tool. Hopefully, if things go well and there is time, I will also be able to make it a collaborative tool.


For those of you who are interested in following my progress, I will create a blog shortly after this post and then edit this post to include its location.

EDIT: The blog can be found here. (For those of you who intend to read it, please keep in mind that I am required to maintain this blog by my university, and that it is used for task and deadline definition and tracking rather than as a typical discussion blog.) This is also my first serious project, and I have no idea how I'd go about hosting a Silverlight application so that I can show demos and progress to you guys or anyone outside localhost. I really would like to, though, but could only spend a maximum of £5-7 a month on hosting, as I am student-poor.


After playing around with the demo included alongside the SDK download, it is clear that while the SDK works well with large objects of a colour that is not abundant in the video's background, it cannot (with default settings) locate markers accurately enough to support respectable drag-and-drop of GUI elements.

However, if one fiddles with the camera settings and turns the exposure right down so that the image on-screen is entirely black (although only just), and then points a TV remote control at the camera with a finger held down on a button, that IR source can be defined as a marker. You can then draw on screen, and even when the remote is moved about quite fast it still draws a continuous line. Ultimately I intend for the user of my application to wear a lightweight glove. I had originally thought that each finger-tip could carry a coloured LED, but after playing with the demo app I feel each finger-tip will instead have to emit a specific frequency of IR light (or simply have a control for turning each tip on/off) in order to get a finer level of control over moving the markers.


So my first question to this community is: how can I set up a camera control panel to set the exposure of the camera way below what would give a decent image to the human eye? Would my Silverlight app need to adjust the image before sending it to TouchlessLib for processing, or is there a way of telling the Touchless code to look at the image with a reduced level of exposure?
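If the software route turns out to be necessary, one conceptual approach is to scale every pixel's brightness down (mimicking a shorter exposure) and then keep only the brightest spot in the darkened frame. The sketch below is purely illustrative and library-agnostic; the function names are hypothetical and are not part of TouchlessLib's API:

```python
# Sketch: software "exposure reduction" on a grayscale frame.
# A frame is represented as a list of rows of 0-255 brightness values.

def reduce_exposure(frame, factor=0.1):
    """Scale every pixel down, mimicking a shorter camera exposure."""
    return [[int(p * factor) for p in row] for row in frame]

def find_bright_spot(frame, threshold=20):
    """Return the centroid (x, y) of pixels at/above threshold, or None."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, p in enumerate(row):
            if p >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) // len(xs), sum(ys) // len(ys))

# Example: a dim scene with one saturated IR spot at (2, 1).
frame = [
    [40, 50, 60, 40],
    [50, 60, 255, 50],
    [40, 50, 60, 40],
]
dark = reduce_exposure(frame, factor=0.1)  # background drops to near-black
spot = find_bright_spot(dark, threshold=20)
print(spot)  # (2, 1)
```

The point of the two-step process is that after scaling, ordinary scene pixels fall well below the threshold while a saturated IR source stays above it, which is the software analogue of the hardware exposure trick described above.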


FURTHER EDIT: I have now spent some time playing around with SL4 and its webcam abilities. I thought I'd try to get a marker-tracking demo (which was simple in Win Forms) up and running using SL4 and Touchless. Of course, a good starting point for this would be the demo app posted in a discussion here. I downloaded and opened it and... well... I don't understand it. From looking at the code and googling some terms, I think it's an app built on an MVVM architecture, something very new and foreign to my simple MVP mind. Is this level of complexity really what is required to get Touchless marker tracking working on SL4, or is there a simpler way? All that is really needed is a way of getting a System.Windows.Media.CaptureSource into a TouchlessLib.Camera, and we're away, right? Then all we need to do is fill a Silverlight 4 Rectangle with a System.Drawing.Bitmap pulled from TouchlessManager's current camera, rather than the strange WriteableBitmap or BitmapImage that SL4 insists on using.
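On the CaptureSource question: whatever the bridge ends up looking like, at some point raw frame bytes have to become per-pixel brightness values the tracker can work with. Below is a library-agnostic sketch of that unpacking step; the 4-bytes-per-pixel ARGB layout is an assumption for illustration only, and the real SL4 sample format may well differ:

```python
# Sketch: unpacking a raw 32-bit ARGB frame buffer (the kind a webcam
# sink typically hands over as a flat byte array) into grayscale rows.
# Assumed layout: row-major, 4 bytes per pixel in A, R, G, B order.

def argb_to_grayscale(buffer, width, height):
    """Convert a flat ARGB byte buffer into rows of 0-255 luma values."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            i = (y * width + x) * 4
            a, r, g, b = buffer[i:i + 4]
            # Integer luma approximation (Rec. 601 weights, scaled by 1000).
            row.append((299 * r + 587 * g + 114 * b) // 1000)
        rows.append(row)
    return rows

# Example: a 2x1 frame with one black pixel and one white pixel.
buf = bytes([255, 0, 0, 0,  255, 255, 255, 255])
print(argb_to_grayscale(buf, 2, 1))  # [[0, 255]]
```

Integer weights are used instead of floating-point multipliers so that a fully white pixel maps exactly to 255 with no rounding drift.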

Thank you for your time on this matter and I look forward to working with the Touchless SDK.