It’s that time again…..off to DC!

Halfway there – I’m traveling to Washington, D.C. to meet and educate elected officials and regulators on the booming tech industry. As a leader in the field of custom app development, InterKnowlogy – and so many of our clients – are greatly affected by the policies that our elected officials author and support. It’s an important reminder that our voices DO count, people listen, and we each have a responsibility to get involved.

As part of ACT | The App Association’s annual fly-in, I’m joining more than 50 small tech companies from across the country to advocate for an environment that encourages innovation and inspires growth.  Our message is simple. Small companies like ours are creating solutions that are improving lives, creating jobs, and invigorating our economy.  The creativity that comes from these discussions is amazing, and the energy inspires each of us to go out and craft our message in a way that is unique to each of our environments.

Policymakers in Washington must understand issues threatening small tech companies to ensure growth continues. The concerns we will raise next week include data privacy and security, internet governance, intellectual property and patent reform, mobile health regulation, and regulatory obstacles to growth. These are important issues for which the federal government is considering taking action.

I look forward to meeting with my elected officials and others in Washington to educate them about the tech industry so they can make the right decisions about our future. Hopefully, an informed Congress will help entrepreneurs continue to flourish.

Follow our activities at www.ACTonline.org, or via Twitter @EmilieHersh, and @ACTOnline – come join in the fun as we make a difference in our collective future!

IMSA Mobile Apps – 3 – Scaling With Azure

When taking your presence to mobile, a scalability conversation quickly occurs. This is especially true when the systems you need to access are on-premises. Your on-premises systems may never have been designed for the user load mobile apps would add. Additionally, they may not even be exposed to the internet, introducing a whole set of security complexities that need to be solved. In the case of IMSA, we are relying on services already exposed to the internet, so that is one less set of issues to manage.

Through our build experiences with Azure projects such as CNN, we knew several considerations would apply. The services referenced below are those supplied directly by IMSA:

  • How many users would concurrently be accessing the services through the mobile apps?
  • What is the latency for the service calls?
  • How much effort is it for the service to generate the data?
  • How often are the services taken down for maintenance? For how long?
  • Will the services change over time as backend systems change?

These are relatively simple questions, but they shape the approach you take to scaling. To provide the best possible mobile experience, we envisioned a brokering capability served by Azure. All mobile apps across iOS, Android, and Universal Apps would go through this brokering layer for data access, and the brokering layer caches data from the IMSA services for fast access.

There is immense flexibility in how you shape solutions in Azure for scale, particularly around caching. Ultimately, the purpose of data caching is to minimize the number of trips to the backend services. There can be instances where the backend services are so expensive in time and resources to call that the architecture must do everything possible to keep the user from paying the price of waiting for that call to complete. In that case, Azure can be set up to actively keep its cache fresh and minimize the number of calls to the backend services. Mobile apps would then always have a fast and fluid experience and never feel slow, and a company would not have to worry about pouring massive resources into scaling up their backend services.

Fortunately, this was not the case for us and the IMSA backend services. The backend services are responsive, and the data returned per service call is small. It is also not expensive for the backend services to produce the data. Even so, there is benefit to leveraging Azure. IMSA race events happen at key moments in time, and traffic spikes heavily around each event. It is not beneficial to have hardware lying around mostly idle 90%+ of the time waiting for the spike in usage. Additionally, the IMSA services could be taken down briefly for maintenance. Using Azure to broker calls still has merit because capacity can be scaled up and down around the IMSA events, and minimal additional load is put on the backend services because Azure does most of the work of serving data to the mobile apps.

The approach we took for IMSA relied on a combination of HTTP output caching (via ETag) and Azure Redis Cache, all within Azure Mobile Services. Basically, when a mobile app makes a request to an Azure service for the first time, no ETag is present because our services have not yet generated one. However, we have the URL and the parameters passed in, which together form a unique key to the requested data. The Redis cache is checked to see if the data is present. If the data is present and has not expired, the cached data from Redis is returned. If the data is not present in Redis or has expired, Azure makes the request into the backend IMSA services, puts the response into the cache, and returns it to the calling mobile app. An ETag is generated with each response, so if the mobile app requests the same data again, that ETag is supplied. This informs our Azure services that the calling mobile app already has the data but is not sure whether it is still valid. The benefit of supplying the ETag is that we can check whether the ETag has expired, meaning the related data in the cache has expired. If it has not expired, an HTTP 304 is returned, which is a much lighter-weight response than returning the cached data.
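
To make the flow concrete, here is a minimal sketch of that brokering pattern written as a Web API-style controller using the StackExchange.Redis client. This is not the production IMSA code: the controller name, cache key, backend URL, connection string, and 60-second expiry are all illustrative assumptions.

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
using StackExchange.Redis;

public class ScheduleController : ApiController
{
	// Shared connections; the Redis connection string here is a placeholder.
	private static readonly ConnectionMultiplexer Redis =
		ConnectionMultiplexer.Connect( "your-cache.redis.cache.windows.net,ssl=true,password=..." );
	private static readonly HttpClient Backend = new HttpClient();

	[HttpGet]
	public async Task<HttpResponseMessage> Get( string eventId )
	{
		// The URL plus its parameters form the unique key for the requested data.
		string cacheKey = "schedule:" + eventId;
		IDatabase cache = Redis.GetDatabase();

		string cached = await cache.StringGetAsync( cacheKey );
		if( cached != null )
		{
			string etag = "\"" + Hash( cached ) + "\"";

			// The app sent back the same ETag, so its copy is still valid: answer 304.
			if( Request.Headers.IfNoneMatch.Any( t => t.Tag == etag ) )
			{
				return Request.CreateResponse( HttpStatusCode.NotModified );
			}

			return CreateJsonResponse( cached, etag );
		}

		// Cache miss or expired: call the backend IMSA service, cache the response, return it.
		string fresh = await Backend.GetStringAsync( "https://backend.example.com/api/schedule/" + eventId );
		await cache.StringSetAsync( cacheKey, fresh, TimeSpan.FromSeconds( 60 ) );
		return CreateJsonResponse( fresh, "\"" + Hash( fresh ) + "\"" );
	}

	private HttpResponseMessage CreateJsonResponse( string json, string etag )
	{
		HttpResponseMessage response = Request.CreateResponse( HttpStatusCode.OK );
		response.Content = new StringContent( json, Encoding.UTF8, "application/json" );
		response.Headers.ETag = new EntityTagHeaderValue( etag );
		return response;
	}

	private static string Hash( string payload )
	{
		using( var sha = SHA256.Create() )
		{
			byte[] digest = sha.ComputeHash( Encoding.UTF8.GetBytes( payload ) );
			return BitConverter.ToString( digest ).Replace( "-", "" );
		}
	}
}

The 60-second expiry is just a placeholder; the point is that the broker absorbs the event-day spike while the IMSA services only see a trickle of cache refreshes.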

There is a downside to this approach. When simultaneous requests are made for the exact same data (based on the URL and the parameters passed in) at the exact same moment, each request could make the full trip to the backend IMSA services. If IMSA had millions of users during each event, we would prevent this with data locking within Redis, but they do not, so the extra engineering to prevent it is not warranted.

Through this technique, we are prepared for tens of thousands of new users at each event without bringing the IMSA services to their knees.

ACT | The App Association Welcomes InterKnowlogy CEO Hersh to Its Board

We have exciting news! Just announced, our CEO Emilie Hersh is the newest addition to ACT | The App Association’s Board of Directors. Emilie’s experience spans more than twenty years with tech startups, consulting, and extensive team leadership across a number of industry verticals.

As InterKnowlogy’s Chief Executive Officer, she has successfully grown the company into a highly profitable entity, more than doubling revenue while expanding internationally. The company has become a global leader in developing interactive software for startups, Fortune 100 clients, global media outlets, and many of the best-recognized brands in the world.

“We are delighted to welcome Emilie Hersh to our Board of Directors,” said ACT | The App Association President Jonathan Zuck. “Her leadership has produced extraordinary achievements in innovation and she’s long been a valuable resource to our organization. Emilie will be an exemplary addition to our team.”

About ACT | The App Association:

ACT | The App Association represents more than 5,000 small and mid-size software companies in the mobile app economy. The organization advocates for an environment that inspires and rewards innovation while providing resources to help its members leverage their intellectual assets to raise capital, create jobs, and continue innovating.

SoCal Code Camp – Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API

This year’s SoCal Code Camp at Cal State Fullerton was a blast! So many great speakers and attendees. It’s nice getting out again.

Huge thanks to the crew that came out today to my talk. It was great having so many people there! Here is a link to the materials from my talk:

Crawl, Walk, Talk – Windows Universal App Lifecycle and Cortana API.

Hope to see everyone at the next SoCal Code Camp!

InterKnowlogy Employees Give Back To Community

InterKnowlogy Employees’ Day of Giving

InterKnowlogy colleagues volunteered with Habitat for Humanity as part of the company’s Day of Giving event. InterKnowlogy is known for its incredible company culture and community spirit, with employees who are as passionate about creating innovative technology as they are about the place they live. Volunteers pitched in to work on a job site in Escondido, where future family homes were in different phases of construction. Over half the company’s employees participated in the event and spent the day installing roofing, hanging drywall and painting. The sense of teamwork was magnified when the employees joined together to make an impact in the community, creating awareness about the need to improve living conditions for low-income families.

“I have always been an advocate in giving back, and there are so many ways in which we can all do so – even better when we can do it together. The people at IK make it not only possible, but it is the reason we come to work and excel each day. It is truly my pleasure to work side-by-side with everyone here and give back to those in our community who need the most help!”

—Emilie Hersh, CEO of InterKnowlogy

A Day of Giving is organized each quarter by a committee and gives employees the chance to find volunteer opportunities that can be done as a group. Seeing the great attitudes and the desire to help improve the lives of those less fortunate than ourselves is truly rewarding.

Art Center College of Design Interaction Design Workshop

Interaction Design (IxD) is used for much more than web sites and simple mobile apps. Companies are solving complex problems, and they can use IxD principles to make sure they are building the right thing the first time without writing novels of requirements documents. Rodney Guzman, Chief Architect and founder of InterKnowlogy, will lead an interaction design workshop at the Art Center College of Design. Students will explore how InterKnowlogy works with customers to understand the features they need, and how we echo back what is heard. Students will have the opportunity to listen and observe a real interview with a product stakeholder. Afterwards, the students will work as a team to write a proposal for the work. Our goal is to give the students a real-life scenario of getting from an idea to effectively communicating a solution.

IMSA Mobile Apps – 2 – Planning For Maximizing Code Re-Use Across iOS, Android, and Universal Apps

While we were busy thinking through the interaction design elements of the new IMSA mobile apps, we knew we would have to build six apps (iOS, Android, and Universal Apps, each for phone and tablet). The app architecture we chose for this is Model-View-ViewModel (MVVM). Our mission was to maximize the amount of code sharing across all implementations of the app. The more code sharing we could do, the less code we would have to develop for each platform. Less code means less time to develop and less to test, making the aggressive schedule more achievable.

The Model layer contains the business logic and data that drive the IMSA app. Data is served through a scalable cloud infrastructure being constructed for the mobile apps. Regardless of mobile OS, the business logic and data remain the same. How we access and retrieve the data in the cloud also remains the same. These layers are devoid of any user interface elements and are a logical candidate for re-use across all the mobile operating systems. Perfect – one layer to write once. But we wanted more.

We were suspicious that the View layer would be so unique across mobile operating systems that the ViewModel layer would not be re-usable. The ViewModel layer is responsible for binding the Model layer (the business logic) to a View (the user interface). Remember, we are talking about code sharing across iOS, Android, and Universal Apps – these have to be so different that writing a consistent and shareable ViewModel layer would not be possible, right? Wrong! After some initial prototyping we were pleasantly surprised. The path we have chosen allows us to use the same code in the ViewModel layer across all operating systems.
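
To give a flavor of that shared layer, here is a minimal sketch of a platform-neutral ViewModel. It is not the actual IMSA code; the IScheduleService interface and RaceEvent model are illustrative names. Because nothing in it references a UI type, the same class compiles for iOS, Android, and Universal Apps, and each platform’s View simply data-binds to its properties.

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public class RaceEvent
{
	public string Name { get; set; }
	public string Track { get; set; }
}

public interface IScheduleService
{
	Task<RaceEvent[]> GetUpcomingEventsAsync();
}

public class ScheduleViewModel : INotifyPropertyChanged
{
	private readonly IScheduleService _scheduleService;
	private bool _isLoading;

	public ScheduleViewModel( IScheduleService scheduleService )
	{
		_scheduleService = scheduleService;
		Events = new ObservableCollection<RaceEvent>();
	}

	// Views on each platform bind to these properties; no UI types appear here.
	public ObservableCollection<RaceEvent> Events { get; private set; }

	public bool IsLoading
	{
		get { return _isLoading; }
		set { _isLoading = value; OnPropertyChanged(); }
	}

	public async Task LoadAsync()
	{
		IsLoading = true;
		RaceEvent[] events = await _scheduleService.GetUpcomingEventsAsync();
		foreach( RaceEvent raceEvent in events )
		{
			Events.Add( raceEvent );
		}
		IsLoading = false;
	}

	public event PropertyChangedEventHandler PropertyChanged;

	private void OnPropertyChanged( [CallerMemberName] string propertyName = null )
	{
		PropertyChangedEventHandler handler = PropertyChanged;
		if( handler != null )
		{
			handler( this, new PropertyChangedEventArgs( propertyName ) );
		}
	}
}

Only the implementation of IScheduleService (and the Views themselves) would differ per platform; the ViewModel and Model code is written once.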

From our early calculations, thanks to Visual Studio and Xamarin, we predict about 75% code re-use (of the non-generated code) across all the implementations! Excellent news for the developers and the project manager. We’ll dive into code examples in an upcoming blog, but next we’ll discuss our approach with Azure. Also, this video has additional information about code re-use with Xamarin.

IMSA Mobile Apps – 1 – Architecture Design Session

The IMSA Mobile Apps project is currently in flight, and we are actively working on building this cross-platform, cross-OS solution. This article is the first in a series of blog posts discussing the details of the project, and we’ll be actively trying to catch up to the current day as we build towards the Laguna Seca race.

Back in the first week of December 2014, we flew out to Daytona Beach, Florida to visit the IMSA team together with Microsoft. Microsoft was hosting an Architecture Design Session, or ADS for short, to flesh out features of the solution. It quickly became apparent that the solution was layered and complex. Many features discussed have become part of a longer product roadmap, as IMSA is committed to providing the best experience possible to their fans. It should also be noted that, as in all ideation sessions, some ideas discussed were pushed deep down in the feature backlog.

I am certain that some would ask why IMSA involved Microsoft. This is a mobile app – what does Microsoft know about building mobile apps across iOS and Android? Well, it turns out quite a lot. From past projects, we already knew the tooling we get with Visual Studio and Xamarin allows us to build amazing mobile apps across all platforms and OS’s. The other side of the coin is the plumbing we get to integrate with cloud infrastructure. This app needed to scale across the huge IMSA fan base during live events, and from past projects we knew how effectively we could build scalable mobile apps with Azure. So to IMSA and to us, involving Microsoft made perfect sense.

In the ADS, some interesting features started popping up:

The app would need to change shape depending on whether or not a race is live. We thought treating the app almost like the NFL Now app would be interesting: there would always be something interesting to watch in our app, whether or not an event is live.

IMSA radio is a live audio stream. The app would need to deliver this feed just like other integrated audio content on your device. So turning on IMSA radio, putting your headphones on, and then putting your device in your pocket should be as natural as playing music.

Using the device’s GPS, if the race fan is at the event the app should respond differently than if the person were elsewhere. When you are at an event, what you are interested in is different than when you are not.

Telemetry information from the cars. It would be just awesome to watch your favorite car at the event or at home and see all the g-forces it is pulling as it flies around the corners.

The IMSA services for content and structured information are not scalable for a mobile audience. A cloud infrastructure would need to be placed in front of the IMSA services so content could be cached and served more quickly.

After the ADS we went home and decomposed all the features while looking at the schedule. We needed to pick a race event to target for deployment, and we had a lot of homework to determine our approach. In the next blog we will discuss how we planned to maximize code re-use across all platforms and OS’s.

Kinect 2.0 Face Frame Data with Other Frame Data

Do you have a project where you are using Kinect 2.0 Face detection as well as one, or more, of the other feeds from the Kinect? Well I am, and I was having issues obtaining all the frames I wanted from the Kinect. Let’s start with a brief, high-level overview: I needed to obtain the data relating to the Color image, the Body tracking, and the Face tracking. That seems very straightforward, until I realized that the Face data was not included in the MultiSourceFrameReader class. That reader only provided me the Color and Body frame data. In order to get the Face data I needed to use a FaceFrameReader, which required me to listen for the arrival of two frame events.

For example, I was doing something like this.

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	const FrameSourceTypes frameTypes = FrameSourceTypes.Color | FrameSourceTypes.Body;
	MultiSourceFrameReader kinectFrameReader = _kinectSensor.OpenMultiSourceFrameReader( frameTypes );

	kinectFrameReader.MultiSourceFrameArrived += OnMultiSourceFrameArrived;

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();

}

private void OnMultiSourceFrameArrived( object sender, MultiSourceFrameArrivedEventArgs e )
{
	//Process Color and Body Frames
}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	//Process Face frame
}

In theory, this should not be a problem because the Kinect is firing off all frames at around 30 per second and, I assume, in unison. However, I was running into the issue that if my processing of the Face data took longer than the 30th of a second I had to process it, the color or body data could be at a different point in the cycle. What I was seeing was images that appeared to be striped between two frames. Now, I understand that this behavior could be linked to various issues that I am not going to dive into in this post. But what I had noticed was that the more processing I tried to pack into the Face frame arrival handler, the more frequently I saw bad images. It is worth noting that my actual project will process all six of the faces that the Kinect can track, and when having to iterate through more than one face per frame, the bad, striped images were occurring more often than good images. This led me to my conclusion (and my solution led me to write this blog).

I also did not like the above approach because it forced me to process frames in different places, and possibly on different cycles. So when something wasn’t working, I had to determine which frame was the offender, then go to that processing method. No bueno.

In troubleshooting the poor images, I had the thought, “I just want the color and body frames that the Face frame is using.” Confused? I’ll try to explain. Basically, the Kinect Face tracking is using some conglomeration of the basic Kinect feeds (Color, Depth, Body) to figure out what is a face, and the features of that face. I know this because if a body is not being tracked, a face is not being tracked. The depth is then used to track whether the eyes are open or closed and other intricacies of the face. Anyways, back on track: I had a feeling that the Kinect Face frame had, at least, some link back to the other frames that were used to determine the state of the face for that 30th of a second. That is when I stumbled upon FaceFrame.BodyFrameReference and FaceFrame.ColorFrameReference (FaceFrame.DepthFrameReference also exists; it’s just not needed for my purposes). From those references you can get the respective frames.

After my epiphany my code turned into:

public MainWindowViewModel()
{
	_kinectSensor = KinectSensor.GetDefault();

	//Multiframe reader stuff was here.
	//It is now gone.

	const FaceFrameFeatures faceFrameFeatures = 
		  FaceFrameFeatures.BoundingBoxInColorSpace
		| FaceFrameFeatures.RotationOrientation
		| FaceFrameFeatures.FaceEngagement;

	faceFrameSource = new FaceFrameSource( _kinectSensor, 0, faceFrameFeatures );
	faceFrameReader = faceFrameSource.OpenReader();
	faceFrameReader.FrameArrived += OnFaceFrameArrived;

	_kinectSensor.IsAvailableChanged += OnSensorAvailableChanged;
	_kinectSensor.Open();

}

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	...
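	// The face frame itself comes from the elided code above, roughly:
	// FaceFrame faceFrame = e.FrameReference.AcquireFrame();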
	var bodyReference = faceFrame.BodyFrameReference;
	BodyFrame bodyFrame = bodyReference.AcquireFrame();

	var colorReference = faceFrame.ColorFrameReference;
	ColorFrame colorFrame = colorReference.AcquireFrame();
	//Process Face frame
	...
}

private void ProcessBodyAndColorFrames( BodyFrame bodyFrame, ColorFrame colorFrame )
{
	//Process Body and Color Frames
	...
}

I still have the processing methods split out similarly to how I had them in the first (bad) approach, but with this approach I am a little more confident that the Color and Body frames I am analyzing are the same ones used by the Face processor. I am also freer to split up the processing as I see fit. The color and body are really only lumped together because that is how I was doing it before. In the future they might be split up and done in parallel, who knows.

And with that, the occurrence of bad images appears to be greatly reduced. At least for now. We will see how long it lasts. I still get some bad frames, but I am at least closer to being able to completely blame the Kinect, or poor USB performance, or something other than me (which is my ultimate goal.)

Not a big thing, just something I could not readily find on the web.

Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to “play back” a recorded file that contains Kinect data. Your application will react to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to playback. Keep reading…

More specifically, Kinect for Windows v2 supports the ability to track not only bodies detected in the camera view, but FACES as well. Even better, there are a number of properties on the detected face metadata that tell you if the person is (see the short sketch after this list):

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…
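
For reference, here is a minimal sketch of how those flags come back on a FaceFrameResult, assuming a FaceFrameReader already wired up the way the face tracking snippet earlier on this page does it. The handler body and null checks are my own illustration, not code from this project.

private void OnFaceFrameArrived( object sender, FaceFrameArrivedEventArgs e )
{
	using( FaceFrame faceFrame = e.FrameReference.AcquireFrame() )
	{
		if( faceFrame == null || faceFrame.FaceFrameResult == null )
		{
			return;
		}

		// Each property comes back as a DetectionResult: Yes, No, Maybe, or Unknown.
		// Note: each flag must also be requested via FaceFrameFeatures when the FaceFrameSource is created.
		var faceProperties = faceFrame.FaceFrameResult.FaceProperties;
		DetectionResult lookingAway    = faceProperties[FaceProperty.LookingAway];
		DetectionResult happy          = faceProperties[FaceProperty.Happy];
		DetectionResult mouthMoved     = faceProperties[FaceProperty.MouthMoved];
		DetectionResult wearingGlasses = faceProperties[FaceProperty.WearingGlasses];

		if( happy == DetectionResult.Yes && lookingAway == DetectionResult.No )
		{
			// React to an engaged, happy face here.
		}
	}
}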

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision m4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to mostly being a problem only with FACE TRACKING (the rest of the Kinect data and functionality seem to work fine).

So … even though I have a Kinect, if I’m working on face tracking, I’m out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE: Kinect recorded file – 2 faces, all feeds. (LARGE: ~4.4GB zipped)

Kinect Recording – 2 faces

Hope that helps!