Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream

In this part of my Kinect Interaction blog post series, we go deep into the rabbit hole and examine the foundation of Kinect Interactions – the InteractionStream, upon which the entire library is built. This is a risky ride – with no official documentation, we can only count on our trusty reflector, the source code of the Kinect Interaction SDK, and careful exploration.

You only need to access the treasures of the InteractionStream if you want to go beyond what the KinectRegion and other controls provide – for example, if you want to create your own KinectRegion, zoom a map by gripping it with two hands, or build an entirely new interaction model that uses both hands along with the press and grip gestures.

Initializing the InteractionStream

Initializing the InteractionStream is much like initializing the DepthStream or the SkeletonStream. If you have a KinectSensor object, all you need are the next two lines of code:

    _interactionStream = new InteractionStream(_sensor, new DummyInteractionClient());
    _interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;

What is the DummyInteractionClient that we pass to the constructor? It turns out that the InteractionStream needs an object implementing the IInteractionClient interface in order to work. The interface defines a single method:

    public interface IInteractionClient
    {
      InteractionInfo GetInteractionInfoAtLocation(
          int skeletonTrackingId, 
          InteractionHandType handType, 
          double x, 
          double y);
    }

X and Y are obviously coordinates, and skeletonTrackingId is the identifier of the user. InteractionHandType is a simple enum with the values “None”, “Left” and “Right”, identifying which of the user's hands is meant. The InteractionInfo class has the following members:

    public sealed class InteractionInfo
    {
      public bool IsPressTarget { get; set; }
      public int PressTargetControlId { get; set; }
      public double PressAttractionPointX { get; set; }
      public double PressAttractionPointY { get; set; }
      public bool IsGripTarget { get; set; }
    }

At this point, my Spider-sense tingles, and I think I have guessed the purpose of the IInteractionClient interface: its GetInteractionInfoAtLocation method is called to determine whether the position under a certain user's certain hand cursor is pressable and/or grippable. So, I created a dummy implementation of the IInteractionClient interface that keeps saying YES to all these questions.

    public class DummyInteractionClient : IInteractionClient
    {
        public InteractionInfo GetInteractionInfoAtLocation(
            int skeletonTrackingId, 
            InteractionHandType handType, 
            double x, 
            double y)
        {
            var result = new InteractionInfo();
            result.IsGripTarget = true;
            result.IsPressTarget = true;
            result.PressAttractionPointX = 0.5;
            result.PressAttractionPointY = 0.5;
            result.PressTargetControlId = 1;

            return result;
        }
    }

It seems like we are on track. Just pass an initialized KinectSensor object and this DummyInteractionClient to the constructor of the InteractionStream, and we should be all set, right? Well, not quite – the InteractionFrameReady event does not fire.

Interaction Needs Skeleton and Depth

It turns out that for the InteractionStream to work, it needs to process the data from both the depth and the skeleton streams. So, we need to initialize all three streams. This is what the entire OnLoaded method (which you have to wire up either in XAML or in the constructor of the window) looks like:

   1:  private KinectSensor _sensor;  //The Kinect Sensor the application will use
   2:  private InteractionStream _interactionStream;
   3:   
   4:  private Skeleton[] _skeletons; //the skeletons 
   5:  private UserInfo[] _userInfos; //the information about the interactive users
   6:   
   7:  private void OnLoaded(object sender, RoutedEventArgs routedEventArgs)
   8:  {
   9:      if (DesignerProperties.GetIsInDesignMode(this))
  10:          return;
  11:   
  12:      // this is just a test, so it only works with one Kinect, and quits if that is not available.
  13:      _sensor = KinectSensor.KinectSensors.FirstOrDefault();
  14:      if (_sensor == null)
  15:      {
  16:          MessageBox.Show("No Kinect Sensor detected!");
  17:          Close();
  18:          return;
  19:      }
  20:   
  21:      _skeletons = new Skeleton[_sensor.SkeletonStream.FrameSkeletonArrayLength];
  22:      _userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
  23:   
  24:   
  25:      _sensor.DepthStream.Range = DepthRange.Near;
  26:      _sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
  27:   
  28:      _sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
  29:      _sensor.SkeletonStream.EnableTrackingInNearRange = true;
  30:      _sensor.SkeletonStream.Enable();
  31:   
  32:      _interactionStream = new InteractionStream(_sensor, new DummyInteractionClient());
  33:      _interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
  34:   
  35:      _sensor.DepthFrameReady += SensorOnDepthFrameReady;
  36:      _sensor.SkeletonFrameReady += SensorOnSkeletonFrameReady;
  37:   
  38:      _sensor.Start();
  39:  }
 

It doesn't look so simple anymore, so let's walk through the code and see what each part does. The first five lines define the fields that hold a reference to the Kinect sensor the application will use, the InteractionStream itself, the skeletons identified by the SkeletonStream, and the hand position information about the users as determined by the InteractionStream.

Line 9 makes sure that we don't do anything when running in the designer. Lines 12 through 19 initialize the first available KinectSensor. Note that, for simplicity's sake, we are not using the KinectSensorChooser introduced in the first part of the series.

After that, the _skeletons array is initialized to the maximum number of skeletons the SkeletonStream can handle. Similarly, the _userInfos array (which will store the information we get from the InteractionStream) is initialized to the maximum number of users the InteractionStream can work with. Both of these are 6 at the moment, but it is safer to use this initialization method to be future-proof. The rest is fairly standard initialization for the Kinect SDK. Note that we are using the near mode of the Kinect for Windows sensor, which may not be available if you are using a Kinect for Xbox sensor. In that case, you may want to turn these off in lines 25 and 29, as the sketch below shows.
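
For a Kinect for Xbox sensor, those two lines might be replaced like this – a minimal sketch, assuming that falling back to DepthRange.Default and disabling near-range tracking is what "turning these off" should mean:

    // Kinect for Xbox sensors do not support near range:
    // use the default depth range and disable near-range tracking.
    _sensor.DepthStream.Range = DepthRange.Default;            // instead of line 25
    _sensor.SkeletonStream.EnableTrackingInNearRange = false;  // instead of line 29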

The event handlers for the SkeletonFrameReady and DepthFrameReady events are fairly simple, but a little bit of error handling makes them longer. The key part in each is the call that feeds the frame data to the InteractionStream:

    private void SensorOnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs skeletonFrameReadyEventArgs)
    {
        using (SkeletonFrame skeletonFrame = skeletonFrameReadyEventArgs.OpenSkeletonFrame())
        {
            if (skeletonFrame == null)
                return;

            try
            {
                skeletonFrame.CopySkeletonDataTo(_skeletons);
                var accelerometerReading = _sensor.AccelerometerGetCurrentReading();
                _interactionStream.ProcessSkeleton(_skeletons, accelerometerReading, skeletonFrame.Timestamp);
            }
            catch (InvalidOperationException)
            {
                // SkeletonFrame functions may throw when the sensor gets
                // into a bad state.  Ignore the frame in that case.
            }
        }
    }

Essentially, we acquire a SkeletonFrame, copy out its skeleton data, and pass that to the ProcessSkeleton method of the InteractionStream, along with the accelerometer reading from the Kinect sensor and the frame's timestamp.

The DepthFrameReady event handler is pretty similar:

    private void SensorOnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs depthImageFrameReadyEventArgs)
    {
        using (DepthImageFrame depthFrame = depthImageFrameReadyEventArgs.OpenDepthImageFrame())
        {
            if (depthFrame == null)
                return;

            try
            {
                _interactionStream.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);
            }
            catch (InvalidOperationException)
            {
                // DepthFrame functions may throw when the sensor gets
                // into a bad state.  Ignore the frame in that case.
            }
        }
    }

We acquire a DepthFrame, get its raw pixel data, and pass it to the InteractionStream for processing, along with the timestamp. And finally, the InteractionFrameReady event fires. Yay!

Dissecting InteractionFrameReadyEventArgs

So, now that we have finally made the InteractionStream raise the InteractionFrameReady event, we can start analyzing the data we can get at. The first step is to get our hands on an InteractionFrame, and the interaction data it contains:

    using (var iaf = args.OpenInteractionFrame()) //dispose as soon as possible
    {
        if (iaf == null)
            return;

        iaf.CopyInteractionDataTo(_userInfos);
    }

Remember – the _userInfos array is an array of UserInfo objects, and a UserInfo object contains all the interaction data related to one specific user. More specifically, it has two public properties:

  • int SkeletonTrackingId, which is the id of the user, corresponding to the user ID in the SkeletonStream. If the SkeletonTrackingId property equals 0, it means that the UserInfo object does not contain a valid user.
  • ReadOnlyCollection<InteractionHandPointer> HandPointers, which contains the hands of the specified user (this collection probably has no more than two items most of the time).

Let's dig deeper! The next class to examine is the InteractionHandPointer class, which represents one hand of the user. I used a small WPF application along with Kinect Studio to understand the properties, and displayed all the properties of the InteractionHandPointer class in a nifty little window. Here is the output from my Kinect, with me sitting in front of my computer:

[Screenshot: the test application's property dump on the left; the depth stream and the Kinect Studio 3D Viewer on the right]

On the right is the depth stream, and the super-cool 3D Viewer from Kinect Studio. I have both of my hands in the air, with the left hand closed and the right one open. The keen-eyed among you may notice that the two pictures are not just from a different point of view, but also mirrored – it looks like my right hand is closed instead of my left. That is simply how the display works.

On the left side of the screenshot, you can see the output of my little program, showing the properties of the InteractionHandPointer class (and a little more). I will discuss all of these in detail below.

InteractionHandType HandType

Indicates whether the hand that belongs to the InteractionHandPointer is the left hand (Left), the right hand (Right), or neither (None). The last one sounds a bit funny, but the enumeration does have this option.

bool IsActive

Indicates whether this hand is active. A hand is considered active if it is raised and is in front of the user.

InteractionHandEventType HandEventType

This is another enumeration, showing the grip event that happened to the hand in this frame. This means that if you have performed a grip gesture (closed your hand), the value of HandEventType will be Grip or GripRelease for only one frame! It would be very useful to have a boolean property for this state, since as it stands, we have to track the last non-None event to know whether a hand is open or closed.

InteractionHandEventType LastHandEventType

This one is unfortunately not part of the InteractionHandPointer class, but it solves the above problem of storing the last grip event for each hand. You can find sample code that tracks the last HandEventType for every user and both hands in the attached solution, and in the full event handler at the end of this post.

A bit of good news, though: it seems that when the Kinect detects a user, it correctly sets the HandEventType property in the first frame. This means that even if the user enters the play area with a closed fist, you will know about it and be able to react accordingly.

bool IsPrimaryForUser

Every user has one primary hand when it comes to Kinect Interactions, although you can ignore this when using the InteractionStream directly. The primary hand is the one the user raised first. If the user raises his or her other hand, the second hand is not considered primary as long as the first hand is still raised. If the user lowers the first hand, the second hand becomes primary (assuming it is still raised). In the picture above, you can see that I lifted my right hand first.
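
If you only want to follow the primary hand – similar to what the KinectRegion does – a simple LINQ filter over the HandPointers collection is enough. A minimal sketch (assuming a using System.Linq directive; the cursor logic is left to you):

    // Pick this user's primary hand, if one is currently primary.
    var primaryHand = userInfo.HandPointers.FirstOrDefault(h => h.IsPrimaryForUser);
    if (primaryHand != null)
    {
        // Drive a single-hand cursor from primaryHand.X and primaryHand.Y here.
    }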

bool IsInteractive

To be honest, I have no idea what this property indicates. It seems that you can have both hands interactive at the same time, but it is also possible for a hand to be active, yet not interactive. If you know what this property is about, please let me know in the comments, and I will update the post.

float PressExtent and bool IsPressed

Apart from the grip gesture, the other gesture the InteractionStream can help you with is the “press” gesture. You can tell that a lot of consideration and thought went into the implementation of this seemingly simple gesture. You can press towards the screen or straight ahead in front of you, and it will still detect the press fairly well. You can start with your arm close to your body or almost entirely extended. You can perform the gesture quickly or with moderate speed. However, if you perform it too slowly, the gesture will not be recognized – a slow push merely moves the PhIZ (Physical Interaction Zone, the area where your hand is treated as a pointer and mapped to the screen) a bit further from your body.

PressExtent is proportional to how far along the press gesture the hand is. It is used to fill the hand cursor more and more, indicating that you are getting close to triggering the button. When PressExtent is 1 or higher, IsPressed becomes true.
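
As an illustration, here is a hedged sketch of how these two properties could drive a custom cursor; SetCursorFill and TriggerPressAt are hypothetical methods of your own application, not part of the SDK:

    // PressExtent grows from 0.0 towards 1.0 (and beyond) as the press progresses.
    // Clamp it, since it can overshoot 1.0.
    double fillFraction = Math.Min(hand.PressExtent, 1.0);
    SetCursorFill(fillFraction);          // hypothetical: fills the cursor visual

    if (hand.IsPressed)                   // true once PressExtent reaches 1
        TriggerPressAt(hand.X, hand.Y);   // hypothetical: your own press handling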

Note: I wish the grip gesture also had an IsGripping or IsHandClosed boolean, for consistency's sake. Or events indicating the change of these states. Maybe in an upcoming SDK…

bool IsTracked

The name suggests that this property shows whether the Kinect sensor can directly see the hand. The skeleton engine has this information, but I have not been able to hide my hand well enough for IsTracked to turn false.

float RawX, RawY, RawZ, X, Y

The raw positions of the hand, relative to the Kinect. For the RawZ coordinate, the 0 point is a little in front of your body, and the numbers increase as you extend your hand towards the sensor. The X and Y coordinates have the same values as RawX and RawY. These coordinates seem to cover the PhIZ in front of the user: for X, the numbers increase to the user's right, and for Y they increase downwards. The origin moves with the user. The PhIZ seems to span the raw coordinates (0,0,0) to (1,1,1). I say “seems to” because this is a gray area where some documentation would be very useful.
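
Assuming the X and Y values really do span 0..1 across the PhIZ (my observation, not something the documentation confirms), mapping a hand to pixel coordinates could look like this sketch; windowWidth and windowHeight are whatever your app uses:

    // Map the (apparently) 0..1 PhIZ coordinates to window pixels.
    // Clamp first, because the hand can leave the PhIZ.
    double px = Math.Max(0.0, Math.Min(1.0, hand.X)) * windowWidth;
    double py = Math.Max(0.0, Math.Min(1.0, hand.Y)) * windowHeight;
    // px and py can now position a cursor element on the window.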

Show Me the Codez!

Finally, here is the entire code of the InteractionFrameReady event handler. Of course, the entire solution is also downloadable.

    private Dictionary<int, InteractionHandEventType> _lastLeftHandEvents = new Dictionary<int, InteractionHandEventType>();
    private Dictionary<int, InteractionHandEventType> _lastRightHandEvents = new Dictionary<int, InteractionHandEventType>();

    private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs args)
    {
        using (var iaf = args.OpenInteractionFrame()) //dispose as soon as possible
        {
            if (iaf == null)
                return;

            iaf.CopyInteractionDataTo(_userInfos);
        }

        StringBuilder dump = new StringBuilder();

        var hasUser = false;
        foreach (var userInfo in _userInfos)
        {
            var userID = userInfo.SkeletonTrackingId;
            if (userID == 0)
                continue;

            hasUser = true;
            dump.AppendLine("User ID = " + userID);
            dump.AppendLine("  Hands: ");
            var hands = userInfo.HandPointers;
            if (hands.Count == 0)
                dump.AppendLine("    No hands");
            else
            {
                foreach (var hand in hands)
                {
                    var lastHandEvents = hand.HandType == InteractionHandType.Left
                                                ? _lastLeftHandEvents
                                                : _lastRightHandEvents;

                    if (hand.HandEventType != InteractionHandEventType.None)
                        lastHandEvents[userID] = hand.HandEventType;

                    var lastHandEvent = lastHandEvents.ContainsKey(userID)
                                            ? lastHandEvents[userID]
                                            : InteractionHandEventType.None;

                    dump.AppendLine();
                    dump.AppendLine("    HandType: " + hand.HandType);
                    dump.AppendLine("    HandEventType: " + hand.HandEventType);
                    dump.AppendLine("    LastHandEventType: " + lastHandEvent);
                    dump.AppendLine("    IsActive: " + hand.IsActive);
                    dump.AppendLine("    IsPrimaryForUser: " + hand.IsPrimaryForUser);
                    dump.AppendLine("    IsInteractive: " + hand.IsInteractive);
                    dump.AppendLine("    PressExtent: " + hand.PressExtent.ToString("N3"));
                    dump.AppendLine("    IsPressed: " + hand.IsPressed);
                    dump.AppendLine("    IsTracked: " + hand.IsTracked);
                    dump.AppendLine("    X: " + hand.X.ToString("N3"));
                    dump.AppendLine("    Y: " + hand.Y.ToString("N3"));
                    dump.AppendLine("    RawX: " + hand.RawX.ToString("N3"));
                    dump.AppendLine("    RawY: " + hand.RawY.ToString("N3"));
                    dump.AppendLine("    RawZ: " + hand.RawZ.ToString("N3"));
                }
            }

            tb.Text = dump.ToString();
        }

        if (!hasUser)
            tb.Text = "No user detected.";
    }

And the MainWindow is simply a TextBlock inside a Viewbox, so that the text size follows the window's size:

    <Window x:Class="InteractionStreamTest.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="350" Width="525">
        <Grid>
            <Viewbox>
                <TextBlock Name="tb" FontFamily="Lucida Console" Text="Initializing..."/>
            </Viewbox>
        </Grid>
    </Window>

Summary

I’ve shown you how to get started with the InteractionStream and acquire information about the users’ hands: detecting active hands, pressing (IsPressed and PressExtent), and open or closed hands (HandEventType and our custom LastHandEventType). What I’ve shown here does not require WPF – you can use it from any .NET application, be it XNA, Windows Forms or even a command line app.
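
As a sanity check of that claim, here is a minimal console sketch of the same wiring – it reuses the DummyInteractionClient and the two frame handlers shown above, just printing grip events instead of updating a TextBlock (error handling omitted; assumes a using System.Linq directive):

    // Console-only usage sketch – no WPF involved.
    _sensor = KinectSensor.KinectSensors.FirstOrDefault();
    _skeletons = new Skeleton[_sensor.SkeletonStream.FrameSkeletonArrayLength];
    _userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];

    _sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    _sensor.SkeletonStream.Enable();

    _interactionStream = new InteractionStream(_sensor, new DummyInteractionClient());
    _interactionStream.InteractionFrameReady += (s, e) =>
    {
        using (var frame = e.OpenInteractionFrame())
        {
            if (frame == null) return;
            frame.CopyInteractionDataTo(_userInfos);
        }

        foreach (var user in _userInfos.Where(u => u.SkeletonTrackingId != 0))
            foreach (var hand in user.HandPointers)
                if (hand.HandEventType != InteractionHandEventType.None)
                    Console.WriteLine("{0} hand: {1}", hand.HandType, hand.HandEventType);
    };

    _sensor.DepthFrameReady += SensorOnDepthFrameReady;       // same handler as above
    _sensor.SkeletonFrameReady += SensorOnSkeletonFrameReady; // same handler as above
    _sensor.Start();
    Console.ReadLine(); // keep the process alive while the events arrive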

Please share your thoughts, feedback, etc. in the comments!


Posted May 03 2013, 09:43 PM by vbandi

Comments

Valentin H wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Sat, May 4 2013 16:52

Really useful, as the official documentation is very limited.

Thank you very much!

yeadude wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, May 7 2013 16:21

Loving this series!

Do you think you could make a quick example in C++?

I'm kinda stuck on some new functions like InteractionStream.InteractionFrameReady (I don't know what the equivalent is in C++).

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, May 7 2013 20:34

I am very glad that you love these blog posts... Unfortunately, I am not knowledgeable enough in C++ to help you... it's been ages since I used C++ for anything serious. Sorry :(

Geo wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Wed, May 8 2013 14:24

First of all, thanks for this series, it is very helpful for beginners in Kinect like myself.

What I'd like to ask is whether you are going to post a tutorial on using two cursors (one for each hand) in a basic app, doing press-to-push, scrolling, and perhaps zooming in/out. That would be a really big help.

Keep it up and thanks again.

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Wed, May 8 2013 14:42

Hi Geo,

I am glad you like the posts!

Press-to-push and scrolling have been discussed in the first part of the series, using the standard controls. I will see what I can do about the rest of your requests – zooming, for example, is something I would like to explore as well.

András

Crash wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Wed, May 8 2013 14:43

If I recall correctly, the member "isInteractive" indicates whether at least one hand – or perhaps the primary hand – of the active user is inside the KinectRegion.

If the hand is not interactive and "isInteractive" is false, the displayed cursor's opacity is below 1 (0.7 or something like that).

BTW, great Blog here, helped me a lot with the new SDK so far.

Geo wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Wed, May 8 2013 16:21

@vbandi I have followed the first part of the series. I was wondering how to use two cursors inside the KinectRegion – left and right hand. If you have any tips, I would find them very useful.

Thanks for the help.

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Wed, May 8 2013 16:32

Well, the KinectRegion itself uses a single-handed interaction model. Luckily, its source code is available within the SDK, so you may be able to dig into it and modify it to handle two hands. I haven't done this exercise myself yet, but the InteractionStream discussed above and the KinectRegion source code should be a good starting point.

Jeong wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Thu, May 9 2013 3:04

I am developing a Kinect application and I have a question for you. I want to customize the cursor in the KinectRegion. How can I do this? I opened KinectCursorVisualizer.cs in Microsoft.Kinect.Toolkit.Controls.csproj, but I could not find the source. Could you give me a hint?

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Thu, May 9 2013 9:18

Jeong: I will try to do a post on this, but it may take me a few weeks to get to it. The hint is: the SDK has the Kinect.Toolkit.Controls source code that you can install. From there, you can open the Themes/Generic.xaml in Blend, and customize the default template of the KinectCursor control.

Rodolfo wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Fri, May 10 2013 18:59

In the new Kinect 1.7 documentation there is a small picture and a definition of what the "interaction" zone is.

msdn.microsoft.com/.../dn188674.aspx

From the documentation:

Tracked: The hand is being tracked by the Kinect sensor, but is down.

Active: The user has their hand up, but outside of the interactive volume.

Interactive: The user is comfortably interacting with the screen.

Grip: The KinectCursor is associated with a given KinectInteraction Control and the hand is recognized as gripping the control.

Press: The KinectCursor is associated with a given KinectInteraction Control and the hand is recognized as pressing the control.

Alain wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, May 20 2013 8:04

Thank you :) for this post. I actually haven't read it yet, but I think you're the first one actually posting something about it :).

Thanks and thanks for answering my previous question.

Anna wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Thu, May 23 2013 10:52

Hi vbandi,

I've actually been following your Kinect posts from the first one up to this one.

But I have a question I wanted to ask: how different are KinectSensor and KinectSensorChooser?

In my case, I want to fire an event by tracking whether the user is there or not.

In the sample I got from CodePlex, tracking the user requires using KinectSensor together with the InteractionStream, but at the same time I need to use KinectSensorChooser for the WPF controls.

alex wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Sat, Jun 1 2013 9:33

Excellent post! I wonder if anyone has tried the Kinect Toolbox (kinecttoolbox.codeplex.com). While I couldn't get it working, it contains code for gesture detection that could be an inspiration for your own development.

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, Jun 4 2013 17:40

Anna, the code above has a variable called "hasUser". That should allow you to detect whether there is a user or not. Another way to do it is to use the SkeletonStream instead of the InteractionStream, and go from there.

Leonardo wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, Jun 10 2013 12:54

First, congratulations on the posts. I developed a simple application using the Kinect SDK 1.7; however, when the application is used with the Kinect, and not with the mouse, processor consumption increases and the application starts to crash. Do you know why this happens? Thank you.

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, Jun 10 2013 12:57

Hi Leonardo,

Sounds like a bug in either your code or the Kinect SDK. If you can send a small (minimal) repro project to the Kinect for Windows team, I am sure they will be happy to look into it.

Thanks for liking the posts!

alok wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, Jun 11 2013 10:24

hi vbandi

I'm having a problem with _sensor.Start() in your code... line 38.

It gives some "hardware not supported" error.

Can you help me with this?

I'm building your example on .NET 4, VS 2010.

Jared wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, Jun 17 2013 0:53

vbandi,

Awesome tutorial, Thank you! After banging my head on the wall for the last week, I think I'm starting to understand the interaction stream. Thanks!

alok,

I had a similar error. If you're using an Xbox Kinect, make sure you deactivate near mode.

Nithin wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, Jun 17 2013 6:12

Hi Vbandi,

Thanks for this awesome tutorial... Can you please help us with handling two hands inside the KinectRegion and customizing the hand cursor size? I tried opening Generic.xaml in Blend as per your suggestion, but it's not opening.

Markus wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Mon, Jun 17 2013 17:15

Thanks!

I've been trying to work out how to use KinectRegion in XNA the whole day^^

Starting Experience in Kinect | Developer Gone Wrong wrote Starting Experience in Kinect | Developer Gone Wrong
on Tue, Jun 18 2013 4:46

Pingback from  Starting Experience in Kinect | Developer Gone Wrong

Cherry wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, Jun 25 2013 8:40

What's the unit and range used in the X, Y / RawX, RawY, RawZ coordinates in the hand info? Is it the same – meters – as in the skeleton coordinates?

Or is a Kinect Region required to use those coordinates?

Cherry wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, Jun 25 2013 9:45

OK, I just tested the sample code; obviously the coordinates are not meters – I saw X change by 1 when my hand moved about 20 cm.

The problem is that I want to write an application using both skeleton and interaction data. In SkeletonFrameReady I check the application's state and decide whether to pass the skeleton data to the interaction stream or process the skeleton myself – because I need some pose interaction too, not only hands. How can I convert skeletal coordinates to interaction hand coordinates?

vbandi wrote re: Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream
on Tue, Jun 25 2013 10:13

Hi Cherry,

I think the coordinates are more or less relative to the interaction zone, which depends on the user – when and where she raised her hand when it became active, and so on. If you want data in absolute coordinates, you should use the skeleton stream.

I don't think you can convert between the two, but there's nothing stopping you from looking at both sets of data and working from both.