People Watching, with SoftKinetic and Iisu

I recently spent some time working on a project that utilizes SoftKinetic cameras through the Iisu SDK, more specifically its C# wrapper. For those of you who don’t know what SoftKinetic is, the cameras operate similarly to the Kinect: they attempt to detect and track people who are in their field of view. This allows me, the developer, to create applications that react to the user without the need for traditional input devices (mouse, keyboard, etc.). This is typically done through known gestures, but does not have to be.

Getting Started

The first step in using the Iisu SDK is getting a device that is Iisu compatible. For my most recent project, I was working with the Panasonic EKL3106 and the SoftKinetic DS311. The EKL3106, the DS311 and the Kinect are very similar in appearance, and also very similar in function and in the data each can provide. When using the EKL3106 and DS311 in conjunction with the Iisu SDK, I was able to detect and track people, as well as recognize gestures. The Iisu SDK is available through the SoftKinetic website, and does require licensing.

Development Experience

Getting started with Iisu was time consuming and a bit frustrating. Currently, the documentation is very limited. The first lesson I had to learn was to not rely on the C# documentation; instead, look at the C++ documentation and learn how to translate that information to work for your purposes in the C#/.NET world. The C# documentation’s usefulness was really limited to getting a basic connection to the hardware; beyond that, the C++ documentation was all I could rely on. The process of starting and configuring the camera relies on the use of case-sensitive string literals, which makes learning what can be done much more difficult. Enabling a feature of the camera, or subscribing to some data feed from the camera, required me to look in the documentation for the exact string to pass in. Although not difficult, it is not as intuitive as I had hoped and required a bit of investigation. Once that was sorted out, the wrapper provides an easy interface to get data from the camera.


I feel the user-as-a-controller technology still has a lot of room to improve. Too often the camera would report false positives and track a person who was not there, which prevented it from detecting when an actual user was in front of the camera. The opposite problem also occurs: the camera sometimes has difficulty detecting when a person is in front of it. In general, tracking people needs to become more reliable. Iisu needs to be able to see a person at least from head to waist to detect a user, and is most accurate when placed above head level looking down. That being said, once a user is detected, the tracking of various joints is pretty accurate, often to an impressive degree.

Another positive of the Iisu SDK is the built-in gesture recognition. The Iisu SDK has a concept of a controller built in. What this means is that if a person in front of the camera performs a known gesture with their hand (a wave or a circle), a “controller” is created at the gesture location. This feature allows me to track the user’s hand, as a controller, pretty accurately. This is the route I went in my most recent project and it works very well, making simple hand tracking and gesture recognition much easier.

I like the idea of touchless interaction and the “magical” feel it provides, but after using the technology, I feel it needs to mature a bit in order to become more reliable. I am excited to see what the next generation of hardware will bring.

2 thoughts on “People Watching, with SoftKinetic and Iisu”

  1. Hi scottyoung, I’m new to iisu, and I’m trying to use the wrapper too. As you mentioned, the C# doc is really limited, so I mainly rely on the C++ doc too.
    I’ve managed to get the depth and RGB images and show them in the window. Now I am having trouble showing the skeleton on the RGB image. The position of the joints confused me. The Kinect SDK has a method to convert a 3D position to a 2D point, but in iisu there is no such method, and the XYZ position is not the same as Kinect’s. In Kinect, the sensor itself is the origin point (0, 0, 0). In iisu, X is the same as Kinect, Z is the vertical position I assume, so Y should be the distance from the sensor, but it has negative values. Confusing… do you know the meaning of the XYZ position in iisu?

    Looking forward to your reply.

    • Hello Luo,

      It has been a few months since I have worked with iisu, and I no longer have the documentation on my computer to review, so I am relying on my memory at this point, which may not be all that great. From what I remember, the iisu coordinate space was a bit difficult to figure out initially, and you are correct that the X axis is the same as Kinect’s while the Z axis is different. As for why the Y axis is reporting a negative value, I do not have a good answer. In the work I did with iisu, the Y position of my skeleton was not used, so it was basically ignored. What I would do is log the reported positions to figure out the range of the values. I suspect the origin is not the camera, but the middle of the “scene” that the camera is looking at. This would mean that if you are between the camera and the center point of the scene, you will get a negative number. There are different ways to set up the scene, and from what I remember there is some calibration software that comes packaged with the SDK that might shed some light on this. I also remember there being some scene calibration documentation in the C++ docs which was fairly easy to translate to the C# wrapper.

      Sorry I do not have a simple and direct answer for you, but I hope I have shed a tiny bit of light on your problem to help you out.
