IMSA

Somewhat lost among the jaw-dropping announcements at Build 2015 was the launch of a pretty incredible suite of mobile apps for IMSA (the International Motor Sports Association).

What we built in such a short space of time is nothing short of awesome. From idea to production on the cutting edge: a cross-platform app, built by our team in a little over three months.


The App Experience

The new app includes enhanced features such as live broadcast television, IMSA radio, live timing and scoring of race events, multiple in-car camera feeds, live tweets, and custom viewing options for fans to access straight from their devices.

Through a collaborative effort, InterKnowlogy partnered with IMSA and Microsoft to deliver a new, game-changing, multi-platform mobile experience for all of IMSA’s fans, both at the race and wherever they watch the action.

Technical Stack

VS and MSFT stuff

  • Visual Studio 2013
  • Visual Studio Online
  • ReSharper 9
  • Windows 8.1 Universal (Phone and Tablet)

Azure stuff

  • Azure Mobile Services
  • Azure Redis Cache
  • Azure App Insights

Xamarin stuff

  • Xamarin.Forms 1.4
  • Xamarin Studio
  • Xamarin Insights

Other stuff

  • Akamai CDN
  • iOS 8
  • Android KitKat
  • PhoneSM.CodePlex.com (HLS video streaming)
  • Json.NET
  • Little Watson
  • Slack

Get the app on your own device

Download the apps for your device here:

  • Windows Phone
  • Windows Tablet
  • Apple
  • Android

About IMSA:

The International Motor Sports Association, LLC (IMSA) was originally founded in 1969 with a long and rich history in sports car racing. Today, IMSA is the sanctioning body of the TUDOR United SportsCar Championship, the premier sports car racing series in North America. IMSA also sanctions the Continental Tire SportsCar Challenge and the Cooper Tires Prototype Lites Powered by Mazda, as well as four single-make series: Porsche GT3 Cup Challenge USA by Yokohama; Ultra 94 Porsche GT3 Cup Challenge Canada by Michelin; Ferrari Challenge North America; and Lamborghini Super Trofeo North America. IMSA – a company within the NASCAR Group – is the exclusive strategic partner in North America with the Automobile Club de l’Ouest (ACO) which operates the 24 Hours of Le Mans as a part of the FIA World Endurance Championship. The partnership enables selected TUDOR Championship competitors to earn automatic entries into the prestigious 24 Hours of Le Mans.

It’s that time again… off to DC!

Halfway there – I’m traveling to Washington, D.C. to meet and educate elected officials and regulators on the booming tech industry. As a leader in the field of custom app development, InterKnowlogy – and so many of our clients – are greatly affected by the policies that our elected officials author and support. It’s an important reminder that our voices DO count, people listen, and we each have a responsibility to get involved.

As part of ACT | The App Association’s annual fly-in, I’m joining more than 50 small tech companies from across the country to advocate for an environment that encourages innovation and inspires growth.  Our message is simple. Small companies like ours are creating solutions that are improving lives, creating jobs, and invigorating our economy.  The creativity that comes from these discussions is amazing, and the energy inspires each of us to go out and craft our message in a way that is unique to each of our environments.

Policymakers in Washington must understand the issues threatening small tech companies to ensure that growth continues. The concerns we will raise next week include data privacy and security, internet governance, intellectual property and patent reform, mobile health regulation, and regulatory obstacles to growth. These are important issues on which the federal government is considering taking action.

I look forward to meeting with my elected officials and others in Washington to educate them about the tech industry so they can make the right decisions about our future. Hopefully, an informed Congress will help entrepreneurs continue to flourish.

Follow our activities at www.ACTonline.org, or on Twitter via @EmilieHersh and @ACTOnline – come join in the fun as we make a difference in our collective future!


ACT | The App Association Welcomes InterKnowlogy CEO Hersh to Its Board

We have exciting news! Just announced: our CEO, Emilie Hersh, is the newest addition to ACT | The App Association’s Board of Directors. Emilie’s experience spans more than twenty years with tech startups, consulting, and extensive team leadership across a number of industry verticals.

As InterKnowlogy’s Chief Executive Officer, she has grown the company into a highly profitable business, more than doubling revenue while expanding internationally. The company has become a global leader in developing interactive software for startups, Fortune 100 clients, global media outlets, and many of the best-recognized brands in the world.

“We are delighted to welcome Emilie Hersh to our Board of Directors,” said ACT | The App Association President Jonathan Zuck. “Her leadership has produced extraordinary achievements in innovation and she’s long been a valuable resource to our organization. Emilie will be an exemplary addition to our team.”

About ACT | The App Association:

ACT | The App Association represents more than 5,000 small and mid-size software companies in the mobile app economy. The organization advocates for an environment that inspires and rewards innovation while providing resources to help its members leverage their intellectual assets to raise capital, create jobs, and continue innovating.

SoCal Code Camp – Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API

This year’s SoCal Code Camp at Cal State Fullerton was a blast! So many great speakers and attendees. It’s nice getting out again.

Huge thanks to the crew that came out today to my talk. It was great having so many people there! Here is a link to my materials for my talk:

Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API.

Hope to see everyone at the next SoCal Code Camp!

InterKnowlogy Employees Give Back To Community

InterKnowlogy Employees Day of Giving

InterKnowlogy colleagues volunteered with Habitat for Humanity as part of the company’s Day of Giving event. InterKnowlogy is known for its incredible company culture and community spirit, with employees who are as passionate about creating innovative technology as they are about the place they live. Volunteers pitched in to work on a job site in Escondido, where future family homes were in different phases of construction. Over half the company’s employees participated in the event and spent the day installing roofing, hanging drywall and painting. The sense of teamwork was magnified when the employees joined together to make an impact in the community, creating awareness about the need to improve living conditions for low-income families.

“I have always been an advocate in giving back, and there are so many ways in which we can all do so – even better when we can do it together. The people at IK make it not only possible, but it is the reason we come to work and excel each day. It is truly my pleasure to work side-by-side with everyone here and give back to those in our community who need the most help!”

—Emilie Hersh, CEO of InterKnowlogy

A Day of Giving is organized each quarter by committee and gives employees the opportunity to find volunteer opportunities that can be done as a group. Seeing the great attitudes and desire to help improve the lives of those less fortunate than ourselves is truly rewarding.

Art Center College of Design Interaction Design Workshop

Interaction Design (IxD) is used for much more than websites and simple mobile apps. Companies are solving complex problems, and they can use IxD principles to make sure they are building the right thing the first time, without writing novels of requirements documents. Rodney Guzman, Chief Architect and founder of InterKnowlogy, will lead an interaction design workshop at the Art Center College of Design. Students will explore how InterKnowlogy works with customers to understand the features they need, and how we echo back what we hear. Students will have the opportunity to listen in on and observe a real interview with a product stakeholder. Afterwards, the students will work as a team to write a proposal for the work. Our goal is to give the students a real-life scenario of getting from an idea to effectively communicating a solution.

Kinect 2.0 Face Frame Data with Other Frame Data

Do you have a project that uses Kinect 2.0 Face detection as well as one, or more, of the other feeds from the Kinect? Well I do, and I was having issues obtaining all the frames I wanted from the Kinect. Let’s start with a brief, high-level overview: I needed the data for the Color image, the Body tracking, and the Face tracking. That seems very straightforward, until I realized that the Face data is not included in the MultiSourceFrameReader class. That reader only provided me the Color and Body frame data. In order to get the Face data I needed to use a FaceFrameReader, which required me to listen for the arrival of two separate frame events.

For example, I was doing something like this:

private readonly KinectSensor _kinectSensor;
private FaceFrameSource faceFrameSource;
private FaceFrameReader faceFrameReader;

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	// One reader for the Color and Body streams...
	const FrameSourceTypes frameTypes = FrameSourceTypes.Color | FrameSourceTypes.Body;
	MultiSourceFrameReader kinectFrameReader = _kinectSensor.OpenMultiSourceFrameReader( frameTypes );
	kinectFrameReader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;

	// ...and a separate source/reader pair for the Face stream,
	// tracking one body with the features we care about.
	const FaceFrameFeatures faceFrameFeatures =
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();
}

private void OnMultiSourceFrameArrived( object sender, MultiSourceFrameArrivedEventArgs e )
{
	//Process Color and Body Frames
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	//Process Face frame
}

In theory, this should not be a problem, because the Kinect fires off all frames at around 30 per second and, I assume, in unison. However, I ran into the issue that if my processing of the Face data took longer than the 30th of a second I had to process it, the Color or Body data could be at a different point in the cycle. What I was seeing was images that appeared to be striped between two frames. Now, I understand this behavior could be linked to various issues that I am not going to dive into in this post. But what I had noticed was that the more processing I packed into the Face frame arrival handler, the more frequently I saw bad images. It is worth noting that my actual project processes all six of the faces the Kinect can track, and when iterating through more than one face per frame, the bad, striped images occurred more often than good ones. This led me to my conclusion (and my solution led me to write this blog).
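
As a quick aside: every Kinect v2 frame carries a RelativeTime timestamp, so one way to confirm that the frames you are pairing up really came from different capture cycles is a check like the following. This is just a hedged diagnostic sketch, not from my project; the 33 ms threshold is simply the rough length of one 30 fps cycle.

// Hedged diagnostic sketch: at ~30 fps, frames captured in the same
// cycle should have RelativeTime values well under one frame (~33 ms) apart.
private static bool FramesAreInSync( ColorFrame colorFrame, BodyFrame bodyFrame )
{
	TimeSpan delta = colorFrame.RelativeTime - bodyFrame.RelativeTime;
	return Math.Abs( delta.TotalMilliseconds ) < 33.0;
}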

I also did not like the above approach because it forced me to process frames in different places, and possibly on different cycles. So when something wasn’t working, I had to determine which frame was the offender, then go to that frame’s processing method. No bueno.

In troubleshooting the poor images, I had the thought: “I just want the Color and Body frames that the Face frame is using.” Confused? I’ll try to explain. Basically, Kinect Face tracking uses some conglomeration of the basic Kinect feeds (Color, Depth, Body) to figure out what is a face, and the features of that face. I know this because if a body is not being tracked, a face is not being tracked. The Depth feed is then used to track whether the eyes are open or closed and other intricacies of the face. Anyways, back on track: I had a feeling that the Kinect Face frame had at least some link back to the other frames that were used to determine the state of the face for that 30th of a second. That is when I stumbled upon FaceFrame.BodyFrameReference and FaceFrame.ColorFrameReference (FaceFrame.DepthFrameReference also exists; it’s just not needed for my purposes). From those references you can get the respective frames.

After my epiphany, my code turned into:

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	//Multiframe reader stuff was here.
	//It is now gone.

	const FaceFrameFeatures faceFrameFeatures =
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	using ( FaceFrame faceFrame = e.FrameReference.AcquireFrame() )
	{
		if ( faceFrame == null )
		{
			return;
		}

		// Pull the Body and Color frames that this Face frame was built from.
		var bodyReference = faceFrame.BodyFrameReference;
		BodyFrame bodyFrame = bodyReference.AcquireFrame();

		var colorReference = faceFrame.ColorFrameReference;
		ColorFrame colorFrame = colorReference.AcquireFrame();

		//Process Face frame
		//...

		// (Remember to dispose bodyFrame and colorFrame when done with them.)
		ProcessBodyAndColorFrames( bodyFrame, colorFrame );
	}
}

private void ProcessBodyAndColorFrames( BodyFrame bodyFrame, ColorFrame colorFrame )
{
	//Process Body and Color Frames
	//...
}

I still have the processing methods split out similarly to how I had them the first (bad) way, but with this approach I am a little more confident that the Color and Body frames I am analyzing are the same ones used by the Face processor. I am also freer to split up the processing as I see fit. The Color and Body are really only lumped together because that is how I was doing it before. In the future they might be split up and done in parallel, who knows.
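
If I ever do split them up, a rough sketch of that parallel version might look like the following. This is only a sketch under a few assumptions: ProcessBodies and ProcessPixels are hypothetical stand-ins for the real per-frame work, and it needs a using for System.Threading.Tasks.

// A hedged sketch of processing the Body and Color data in parallel.
// The important part: copy the data out of the frame objects first,
// because Kinect frames should be disposed quickly and never handed
// directly to background tasks.
private void ProcessBodyAndColorFramesInParallel( BodyFrame bodyFrame, ColorFrame colorFrame )
{
	Body[] bodies = new Body[ bodyFrame.BodyCount ];
	bodyFrame.GetAndRefreshBodyData( bodies );

	FrameDescription desc = colorFrame.FrameDescription;
	byte[] pixels = new byte[ desc.Width * desc.Height * 4 ]; // BGRA = 4 bytes per pixel
	colorFrame.CopyConvertedFrameDataToArray( pixels, ColorImageFormat.Bgra );

	Task bodyTask = Task.Run( () => ProcessBodies( bodies ) );   // hypothetical helper
	Task colorTask = Task.Run( () => ProcessPixels( pixels ) );  // hypothetical helper
	Task.WaitAll( bodyTask, colorTask );
}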

And with that, the occurrence of bad images appears to be greatly reduced. At least for now; we will see how long it lasts. I still get some bad frames, but I am at least closer to being able to completely blame the Kinect, or poor USB performance, or something other than me (which is my ultimate goal).

Not a big thing, just something I could not readily find on the web.

Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to “play back” a recorded file that contains Kinect data. Your application reacts to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to play back. Keep reading…

More specifically, Kinect for Windows v2 supports tracking not only the bodies detected in the camera view, but FACES as well. Even better, there are a number of properties on the detected face metadata that tell you if the person is (see the sketch after this list):

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…
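
Reading those properties with the Microsoft.Kinect.Face API looks roughly like the sketch below. It assumes the FaceFrameSource was created with the matching FaceFrameFeatures flags (Happy, LookingAway, MouthMoved, Glasses) and that this is a FaceFrameReader.FrameArrived handler; treat it as a sketch, not production code.

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	using ( FaceFrame faceFrame = e.FrameReference.AcquireFrame() )
	{
		if ( faceFrame == null || faceFrame.FaceFrameResult == null )
		{
			return;
		}

		// Each property comes back as a DetectionResult: Yes, No, Maybe, or Unknown.
		var props = faceFrame.FaceFrameResult.FaceProperties;

		DetectionResult happy       = props[ FaceProperty.Happy ];
		DetectionResult lookingAway = props[ FaceProperty.LookingAway ];
		DetectionResult mouthMoved  = props[ FaceProperty.MouthMoved ];
		DetectionResult glasses     = props[ FaceProperty.WearingGlasses ];

		if ( happy == DetectionResult.Yes && lookingAway == DetectionResult.No )
		{
			// The person is engaged with the camera and smiling.
		}
	}
}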

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision M4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to being mostly a problem with FACE TRACKING (the rest of the Kinect data and functionality seems to work fine).

So … even though I have a Kinect, if I’m working on face tracking, I’m out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.
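
If you would rather script that playback than drive Kinect Studio by hand, the Microsoft.Kinect.Tools assembly that ships with the Kinect v2 SDK exposes the same capability. A minimal sketch, assuming a hypothetical recording path:

using System.Threading;
using Microsoft.Kinect.Tools; // ships with the Kinect for Windows v2 SDK

// Plays a recorded .xef clip back through the Kinect service, so an app
// written against the normal Kinect API receives the frames as if they
// came from a live sensor.
public static void PlayRecording()
{
	using ( KStudioClient client = KStudio.CreateClient() )
	{
		client.ConnectToService();

		// Hypothetical path -- point this at your own recording.
		using ( KStudioPlayback playback = client.CreatePlayback( @"C:\recordings\twoFaces.xef" ) )
		{
			playback.Start();
			while ( playback.State == KStudioPlaybackState.Playing )
			{
				Thread.Sleep( 500 );
			}
		}

		client.DisconnectFromService();
	}
}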

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE: Kinect recorded file – 2 faces, all feeds. (LARGE: ~4.4GB zipped)

Kinect Recording – 2 faces

Hope that helps!

Sideloading Windows Store Apps – When Unlimited Has a Limit

In part 1 and part 2 of this series, I described how and where to buy a Windows sideloading key, and then how to configure a machine to sideload your “store” application.

I did not think there was another part to this story … until I checked my license and its number of activations. The license you purchase and use to sideload Windows Store applications is supposed to be for an “unlimited number of devices“.

MS Claim of Activations on Unlimited Devices

You can imagine my surprise and frustration when I saw in the Volume License Service Center that I had burned through 7 of 25 activations in the first few days!!

Long story short, after a few emails with the VLSC, they said they set the number of activations on that “UNLIMITED” license to 25 “for Microsoft tracking purposes on how many times the product has been used“. In the event you run out, you can request more activations by contacting the MAK team.

I do NOT want to be in production and getting calls from a customer that can no longer sideload the application because we have reached the maximum number of activations. Sure enough, it took another couple emails, but the MAK team was “happy” to increase the number… to 225. Still not unlimited, but a somewhat large number that I will someday likely have to increase again.

225 Activations

Where I uncovered the answers

      vlserva -at- microsoft.com
      MAKAdd -at- microsoft.com