ACT | The App Association Welcomes InterKnowlogy CEO Hersh to Its Board

We have exciting news! Just announced, our CEO Emilie Hersh is the newest addition to ACT | The App Association’s Board of Directors. Emilie’s experience spans more than twenty years with tech startups, consulting, and extensive team leadership across a number of industry verticals.

As InterKnowlogy’s Chief Executive Officer, she has grown the company into a highly profitable entity, more than doubling revenue while expanding internationally. The company has become a global leader in developing interactive software for startups, Fortune 100 clients, global media outlets, and many of the best-recognized brands in the world.

“We are delighted to welcome Emilie Hersh to our Board of Directors,” said ACT | The App Association President Jonathan Zuck. “Her leadership has produced extraordinary achievements in innovation and she’s long been a valuable resource to our organization. Emilie will be an exemplary addition to our team.”

About ACT | The App Association:

ACT | The App Association represents more than 5,000 small and mid-size software companies in the mobile app economy. The organization advocates for an environment that inspires and rewards innovation while providing resources to help its members leverage their intellectual assets to raise capital, create jobs, and continue innovating.

SoCal Code Camp – Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API

This year’s SoCal Code Camp at Cal State Fullerton was a blast! So many great speakers and attendees. It’s nice getting out again.

Huge thanks to the crew that came out today to my talk. It was great having so many people there! Here is a link to the materials from my talk:

Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API.

Hope to see everyone at the next SoCal Code Camp!

InterKnowlogy Employees Give Back To Community

InterKnowlogy Employees Day of Giving

InterKnowlogy colleagues volunteered with Habitat for Humanity as part of the company’s Day of Giving event. InterKnowlogy is known for its incredible company culture and community spirit, with employees who are as passionate about creating innovative technology as they are about the place they live. Volunteers pitched in to work on a job site in Escondido, where future family homes were in different phases of construction. Over half the company’s employees participated in the event and spent the day installing roofing, hanging drywall and painting. The sense of teamwork was magnified when the employees joined together to make an impact in the community, creating awareness about the need to improve living conditions for low-income families.

“I have always been an advocate of giving back, and there are so many ways in which we can all do so – even better when we can do it together. The people at IK make it not only possible, but it is the reason we come to work and excel each day. It is truly my pleasure to work side-by-side with everyone here and give back to those in our community who need the most help!”

—Emilie Hersh, CEO of InterKnowlogy

A Day of Giving is organized each quarter by a committee and gives employees the chance to find volunteer opportunities that can be done as a group. Seeing the great attitudes and the desire to help improve the lives of those less fortunate than ourselves is truly rewarding.

Art Center College of Design Interaction Design Workshop

Interaction Design (IxD) is used for much more than web sites and simple mobile apps. Companies are solving complex problems, and they can use IxD principles to make sure they are building the right thing the first time without writing novels of requirements documents. Rodney Guzman, Chief Architect and founder of InterKnowlogy, will lead an interaction design workshop at the Art Center College of Design. Students will explore how InterKnowlogy works with customers to understand the features they need, and how we echo back what is heard. Students will have the opportunity to listen and observe a real interview with a product stakeholder. Afterwards, the students will work as a team to write a proposal for the work. Our goal is to give the students a real-life scenario of getting from an idea to effectively communicating a solution.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of new IMSA mobile apps, we knew we were going to have to build six apps (iOS, Android, Universal Apps for phone and tablet). The app architecture we chose to follow for this is Model-View-ViewModel (or MVVM). Our mission was to maximize the amount of code sharing across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that drive the IMSA app. Data would be served through a scalable cloud infrastructure being constructed for the mobile apps. Regardless of mobile OS, the business logic and data remain the same. How we access and retrieve the data in the cloud would also remain the same. These layers are devoid of any user interface elements and are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer to write once. But we wanted more.

We were suspicious that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – surely the Views are so different that writing a consistent and shareable ViewModel layer would not be possible, right? Wrong! After some initial prototyping we were pleasantly surprised: the path we have chosen is going to allow us to use the same code in the ViewModel layer across all operating systems.
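
To make this concrete, here’s a minimal sketch of the kind of platform-neutral ViewModel this approach allows. The class and property names are hypothetical (not the actual IMSA code); the point is that nothing in it references a UI framework, so the identical file can compile into the iOS, Android, and Universal App projects.

using System.ComponentModel;
using System.Runtime.CompilerServices;

// Hypothetical example -- not the actual IMSA code.
// No UI types are referenced, so the same class can be shared
// across the iOS, Android, and Universal App projects.
public class RaceEventViewModel : INotifyPropertyChanged
{
	private string _eventName;
	private bool _isLive;

	// The Model layer (business logic and cloud data access) populates
	// these properties; each platform's View binds to them natively.
	public string EventName
	{
		get { return _eventName; }
		set { _eventName = value; OnPropertyChanged(); }
	}

	public bool IsLive
	{
		get { return _isLive; }
		set { _isLive = value; OnPropertyChanged(); }
	}

	public event PropertyChangedEventHandler PropertyChanged;

	private void OnPropertyChanged( [CallerMemberName] string propertyName = null )
	{
		var handler = PropertyChanged;
		if ( handler != null )
			handler( this, new PropertyChangedEventArgs( propertyName ) );
	}
}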

From our early calculations, thanks to Visual Studio and Xamarin, we predict about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming blog, but next we’ll discuss our approach with Azure. Also, this video has additional information on code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight, and we are actively working on building this cross-platform/cross-OS solution. This article is the first in a series of blogs discussing the details of the project, and we’ll be actively trying to catch up to the current day as we build towards the Laguna Seca race.

Back in the first week of December 2014, we flew out to Florida to visit the IMSA team with Microsoft in Daytona Beach. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to its fans. Also, it should be noted that, as in all ideation sessions, some of the ideas discussed were pushed deep down the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps for iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate with cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events, and from past projects we knew how effectively we could build scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some interesting features started popping up:

The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there could always be something interesting to watch in our app, regardless of whether an event is live.

IMSA radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device, so turning on IMSA radio, putting your headphones on, and then slipping your device in your pocket should be as natural as playing music.

Using the device’s GPS, the app should respond differently when the race fan is at the event than when the person is elsewhere. When you are at an event, what you are interested in is different than when you are not.

Telemetry information from the cars. It would be just awesome to watch your favorite car at the event or at home and see all the g-forces it is pulling when flying around the corners.

The existing IMSA services for content and structured information would not scale to a mobile audience. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly (a rough sketch of this idea follows).
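
To make that last point concrete, here’s a rough sketch of the kind of caching façade we mean. The class name, origin URL, and 30-second policy are illustrative assumptions, not the production design:

using System;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading.Tasks;

// Rough, hypothetical sketch of a cloud-side caching facade placed in
// front of the existing IMSA content services. Names, the origin URL,
// and the cache policy are illustrative only.
public class CachedImsaContentService
{
	private static readonly MemoryCache Cache = MemoryCache.Default;
	private readonly HttpClient _client = new HttpClient();

	public async Task<string> GetContentAsync( string relativeUrl )
	{
		// Serve repeat requests from cache so the origin services are not
		// hit by every fan's device during a live event.
		var cached = Cache.Get( relativeUrl ) as string;
		if ( cached != null )
			return cached;

		string content = await _client.GetStringAsync(
			new Uri( new Uri( "http://imsa-origin.example.com/" ), relativeUrl ) );

		// A short TTL keeps live-event data reasonably fresh.
		Cache.Set( relativeUrl, content, DateTimeOffset.Now.AddSeconds( 30 ) );
		return content;
	}
}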


After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment, and we had a lot of homework to do to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.

Kinect 2.0 Face Frame Data with Other Frame Data

Do you have a project where you are using Kinect 2.0 Face detection as well as one or more of the other feeds from the Kinect? Well I am, and I was having issues obtaining all the frames I wanted from the Kinect. Let’s start with a brief, high-level overview: I needed the data relating to the Color image, the Body tracking, and the Face tracking. Seems very straightforward, until I realized that the Face data was not included in the MultiSourceFrameReader class. That reader only provided me the Color and Body frame data. In order to get the Face data I needed to use a FaceFrameReader, which required me to listen for the arrival of two frame events.

For example, I was doing something like this:

// Requires the Microsoft.Kinect and Microsoft.Kinect.Face namespaces.
public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	// One reader for the Color and Body streams...
	const FrameSourceTypes frameTypes = FrameSourceTypes.Color | FrameSourceTypes.Body;
	MultiSourceFrameReader kinectFrameReader = _kinectSensor.OpenMultiSourceFrameReader( frameTypes );

	kinectFrameReader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	// ...and a second, separate reader for the Face stream.
	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();
}

private void OnMultiSourceFrameArrived( object sender, MultiSourceFrameArrivedEventArgs e )
{
	//Process Color and Body Frames
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	//Process Face frame
}

In theory, this should not be a problem because the Kinect is firing off all frames at around 30 per second and, I assume, in unison. However, I was running into the issue that if my processing of the Face data took longer than the 30th of a second I had to process it, the Color or Body data could be at a different point in the cycle. What I was seeing was images that appeared to be striped between two frames. Now I understand that this behavior could be linked to various issues that I am not going to dive into in this post, but what I noticed was that the more processing I tried to pack into the Face frame arrival handler, the more frequently I saw bad images. It is worth noting that my actual project will process all six of the faces that the Kinect can track, and when having to iterate through more than one face per frame, the bad, striped images were occurring more often than good images. This led me to my conclusion (and my solution led me to write this blog).

I also did not like the above approach because it forced me to process frames in different places, and possibly on different cycles. So when something wasn’t working I had to determine which Frame was the offender, then go to that processing method. No bueno.

In troubleshooting the poor images, I had the thought, “I just want the Color and Body frames that the Face frame is using.” Confused? I’ll try to explain. Basically, the Kinect Face tracking is using some conglomeration of the basic Kinect feeds (Color, Depth, Body) to figure out what is a face and the features of that face. I know this because if a body is not being tracked, a face is not being tracked. The Depth is then used to track whether the eyes are open or closed and other intricacies of the face. Anyways, back on track: I had a feeling that the Kinect Face frame had at least some link back to the other frames that were used to determine the state of the face for that 30th of a second. That is when I stumbled upon FaceFrame.BodyFrameReference and FaceFrame.ColorFrameReference (FaceFrame.DepthFrameReference also exists; it’s just not needed for my purposes). From those references you can get the respective frames.

After my epiphany my code turned into:

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	//Multiframe reader stuff was here.
	//It is now gone.

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	// The Face reader is now the only reader; the Body and Color frames
	// come along for the ride via the face frame's references.
	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	FaceFrame faceFrame = e.FrameReference.AcquireFrame();
	if ( faceFrame == null )
		return;

	// The face frame keeps references to the frames it was computed from.
	var bodyReference = faceFrame.BodyFrameReference;
	BodyFrame bodyFrame = bodyReference.AcquireFrame();

	var colorReference = faceFrame.ColorFrameReference;
	ColorFrame colorFrame = colorReference.AcquireFrame();
	//Process Face frame
	...
	ProcessBodyAndColorFrames( bodyFrame, colorFrame );
}

private void ProcessBodyAndColorFrames( BodyFrame bodyFrame, ColorFrame colorFrame )
{
	//Process Body and Color Frames
	...
}

I still have the processing methods split out similarly to how I had them in the first (bad) way, but with this approach I am a little more confident that the Color and Body frames I am analyzing are the same ones used by the Face processor. I am also more free to split up the processing as I see fit. The Color and Body are really only lumped together because that is how I was doing it before. In the future they might be split up and done in parallel, who knows.

And with that, the occurrence of bad images appears to be greatly reduced. At least for now; we will see how long it lasts. I still get some bad frames, but I am at least closer to being able to completely blame the Kinect, or poor USB performance, or something other than me (which is my ultimate goal).

Not a big thing, just something I could not readily find on the web.

Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to “play back” a recorded file that contains Kinect data. Your application will react to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to playback. Keep reading…

More specifically, Kinect for Windows v2 supports the ability to track not only bodies detected in the camera view, but FACES as well. Even better, there are a number of properties on the detected face metadata that tell you if the person is (see the sketch after this list):

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…
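
For the curious, here’s a minimal sketch of how those properties surface in code. The helper name is mine, and each property only shows up if the matching FaceFrameFeatures flag was requested when the FaceFrameSource was created:

using Microsoft.Kinect;
using Microsoft.Kinect.Face;

// Minimal sketch: reading the detection results off a FaceFrame.
// Helper name is hypothetical; faceFrame comes from a FrameArrived handler.
private void ReadFaceProperties( FaceFrame faceFrame )
{
	FaceFrameResult result = faceFrame.FaceFrameResult;
	if ( result == null )
		return; // no face tracked this frame

	// Each value is a DetectionResult: Yes, No, Maybe, or Unknown.
	DetectionResult lookingAway = result.FaceProperties[FaceProperty.LookingAway];
	DetectionResult happy = result.FaceProperties[FaceProperty.Happy];
	DetectionResult mouthMoved = result.FaceProperties[FaceProperty.MouthMoved];
	DetectionResult glasses = result.FaceProperties[FaceProperty.WearingGlasses];
}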

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision m4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to mostly being a problem only with FACE TRACKING (the rest of the Kinect data and functionality seem to work fine).

So … even though I have a Kinect, if I’m working on face tracking, I’m out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE: Kinect recorded file – 2 faces, all feeds. (LARGE: ~4.4GB zipped)

Kinect Recording – 2 faces

Hope that helps!

Sideloading Windows Store Apps – When Unlimited Has a Limit

In part 1 and part 2 of this series, I describe how & where to buy a Windows Sideloading Key and then how to configure a machine to sideload your “store” application.

I did not think there was another part to this story … until I checked my license and its number of activations. The license you purchase and use to sideload Windows Store applications is supposed to be for an “unlimited number of devices.”

MS Claim of Activations on Unlimited Devices

You can imagine my surprise and frustration when I saw in the Volume License Service Center that I had burned through 7 of 25 activations in the first few days!!

Long story short, after a few emails with the VLSC, they said they set the number of activations on that “UNLIMITED” license to 25 “for Microsoft tracking purposes on how many times the product has been used“. In the event you run out, you can request more activations by contacting the MAK team.

I do NOT want to be in production and getting calls from a customer that can no longer sideload the application because we have reached the maximum number of activations. Sure enough, it took another couple emails, but the MAK team was “happy” to increase the number… to 225. Still not unlimited, but a somewhat large number that I will someday likely have to increase again.

225 Activations

Where I uncovered the answers

      vlserva -at- microsoft.com
      MAKAdd -at- microsoft.com

Kinect Development Without a Kinect

Huh? How can you develop software that integrates with the Microsoft Kinect if you don’t have a physical Kinect? We have a number of Kinect devices around the office, but they’re all in use, and I want to test and develop an application we’re writing … there is another way.

Enter Kinect Studio v2.0. This application is installed with the Kinect v2.0 SDK, and allows you to record and playback streams from the Kinect device. It’s usually used to debug a repeatable scenario, but we’ve been using it to spread the ability to develop Kinect-enabled applications to engineers that don’t have a physical Kinect device. There are just a couple settings to be aware of to get this to work.

Someone has to record the streams in the first place. They can select which streams (RGB, Depth, IR, Body Index, etc. – the list of streams is shown below) to include in the recording. The recording is captured in an XEF file that can get large quickly depending on which streams are included (on the order of 4 GB+ for 1 minute). Obviously, you need to include the streams you’re looking to work with in the application you’re developing.

Choose from many streams to include in the recording

So I have my .XEF file to play back – what next?

  • Open the XEF file in Studio.
  • Go to the PLAY tab
  • IMPORTANT: Select which of the available streams you want playback to contain (see screenshot below)
  • Click the settings gear next to the playback window, and select what output you want to see during playback. This does not affect what your application code receives from the Kinect; it controls display in the Studio UI only.
  • Click the Connect to Service button
  • Click PLAY

You should now start getting Kinect events in your application code.
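
The nice part is that the application code has no idea the data is recorded. Here’s a minimal sketch (class and handler names are mine) of ordinary Kinect consumer code; it behaves identically whether the frames come from a physical device or from Studio playback:

using Microsoft.Kinect;

// Minimal, hypothetical sketch of application code consuming Kinect data.
// It works the same against a physical Kinect or a Kinect Studio recording
// played back after clicking "Connect to Service".
public class ColorFeedListener
{
	private readonly KinectSensor _sensor;
	private readonly ColorFrameReader _colorReader;

	public ColorFeedListener()
	{
		_sensor = KinectSensor.GetDefault();
		_colorReader = _sensor.ColorFrameSource.OpenReader();
		_colorReader.FrameArrived += OnColorFrameArrived;
		_sensor.Open();
	}

	private void OnColorFrameArrived( object sender, ColorFrameArrivedEventArgs e )
	{
		using ( ColorFrame frame = e.FrameReference.AcquireFrame() )
		{
			if ( frame == null )
				return; // frame was dropped

			// Process the color frame here, e.g. copy pixels for display.
		}
	}
}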

Here’s what my studio UI looks like (with highlights calling out where to change settings).
Hope that helps.

Kinect Studio UI