I recently spent some time working on a project that uses SoftKinetic cameras through the Iisu SDK, more specifically the Iisu.net wrapper. For those of you who don’t know what SoftKinetic is, the cameras operate much like the Kinect: they attempt to detect and track people in their field of view. This allows me, the developer, to create applications that react to the user without the need for traditional input devices (mouse, keyboard, etc.). This is typically done through known gestures, but it doesn’t have to be.
The first step in using the Iisu SDK is getting a device that is Iisu compatible. For my most recent project, I was working with the Panasonic EKL3106 and the SoftKinetic DS311. The EKL3106, the DS311, and the Kinect are very similar in appearance, and quite similar in function and in the data each can provide. Using the EKL3106 and DS311 in conjunction with the Iisu SDK, I was able to detect and track people, as well as recognize gestures. The Iisu SDK is available through the SoftKinetic website and does require licensing.
Getting started with Iisu.net was time consuming and a bit frustrating. The documentation for Iisu.net is currently very limited. The first lesson I had to learn was not to rely on the C# documentation; instead, look at the C++ documentation and learn how to translate that information for your purposes in the C#/.NET world. The C# documentation’s usefulness was really limited to getting a basic connection to the hardware; beyond that, the C++ documentation was all I could rely on. The process of starting Iisu.net and the camera relies on case-sensitive string literals, which makes learning what the camera can do more difficult. Enabling a feature of the camera, or subscribing to one of its data feeds, required me to dig through the documentation for the exact string to pass in. This isn’t difficult, just not as intuitive as I had hoped, and it required a bit of investigation. Once that was sorted out, the Iisu.net wrapper provides an easy interface for getting data from the camera.
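To make the string-literal registration concrete, here is a minimal sketch of what that startup flow looks like. The data path ("USER1.SKELETON.KeyPoints") and the update/release pattern come from the C++ documentation; the .NET-side names (IisuApi.Create, RegisterDataHandle, Value) are my best recollection of the wrapper and should be checked against your version of Iisu.net.

```csharp
// Hypothetical sketch of starting iisu and subscribing to a data feed.
// Everything hangs on case-sensitive string paths: a typo such as
// "user1.skeleton.keypoints" fails at registration time, not compile time.
IIisuApi iisu = IisuApi.Create();       // name assumed from the wrapper
IDevice device = iisu.CreateDevice();

// Subscribe to the skeleton feed by its documented string path.
IDataHandle<Vector3[]> skeleton =
    device.RegisterDataHandle<Vector3[]>("USER1.SKELETON.KeyPoints");

device.Start();

while (running)
{
    device.UpdateFrame(true);           // block until a new frame arrives
    Vector3[] joints = skeleton.Value;  // current joint positions
    // ... use the joint data ...
    device.ReleaseFrame();              // hand the frame back to iisu
}
```

Once a handle is registered against the right string, reading the data each frame really is this simple; the investigation is all in finding the string.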
I feel the user-as-a-controller technology still has a lot of room to improve. Too often the camera would report false positives and track a person who was not there, which prevented it from detecting when an actual user was in front of it. The opposite problem also occurs: the camera sometimes has difficulty detecting that a person is in front of it. In general, tracking people needs to become more reliable. Iisu needs to see a person at least from head to waist to detect a user, and it is most accurate when placed above head level looking down. That said, once a user is detected, the tracking of the various joints is accurate, often to an impressive degree. Another positive of the Iisu SDK is the built-in gesture recognition. The SDK has a built-in concept of a controller: if a person in front of the camera performs a known gesture with their hand (a wave or a circle), a “controller” is created at the gesture location. This feature let me track the user’s hand, as a controller, quite accurately. This is the route I took in my most recent project, and it works very well. It makes simple hand tracking and gesture recognition much easier.
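The controller feature follows the same string-based model: enable the activation gesture through a parameter path, then read the controller’s data each frame. The path strings below are placeholders in the spirit of the documented ones, and the parameter-handle names are assumptions; look up the exact, case-sensitive strings in the C++ documentation before relying on them.

```csharp
// Hypothetical sketch: enable the wave activation gesture, then read the
// position of the controller it spawns. All paths and .NET names here are
// illustrative -- verify them against the iisu documentation.
IParameterHandle<bool> waveEnabled =
    device.RegisterParameterHandle<bool>("UI.CONTROLLERS.GESTURES.WAVE.Enabled");
waveEnabled.Value = true;

IDataHandle<Vector3> controllerPos =
    device.RegisterDataHandle<Vector3>("UI.CONTROLLER1.POSITION");

while (running)
{
    device.UpdateFrame(true);
    // Once a wave has spawned the controller, this is the tracked hand.
    Vector3 hand = controllerPos.Value;
    device.ReleaseFrame();
}
```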
I like the idea of touchless interaction and the “magical” feel it provides, but after using the technology, I think it needs to mature a bit in order to become more reliable. I am excited to see what the next generation of hardware will bring.