Spatial Placement and Understanding

Back in 2016, Fragments was my favorite demo on the HoloLens platform. In fact, it still is today. There’s just so much about it that I still find fascinating. The way my living room was mapped into a crime scene, the way evidence might be hidden behind furniture, and even the way you interacted with virtual characters.

When I looked over to see Director Kirkland sitting on my own couch, my jaw dropped, and I thought “How’d they do that!?”

We now know that Asobo Studio used an unannounced library to implement these features on HoloLens 1. These capabilities would eventually be released as a library called Spatial Understanding, and it was amazing. It could make suggestions for where to hang virtual art, where holograms of a specific size would fit in a space, and even which planes were “sittable” by NPCs. Here’s a video of how that library worked from a developer’s standpoint:

Sadly, when HoloLens 2 was released, Spatial Understanding didn’t make the cut. Microsoft shipped a similar but simpler library called Scene Understanding, which lets developers find walls, ceilings, floors, and so on, but the intelligent placement aspects of Spatial Understanding were missing. I still hope that a library like Spatial Understanding will reemerge someday.

The original article below still serves as a good description of how to use the simpler plane-finding functions in modern toolkits. If you’re interested in reading up on the original HoloLens 1 Spatial Understanding, I recommend this excellent article by Mike Taulty, and if you’re interested in HoloLens 2 Scene Understanding, the docs are here.

Original Article: Fragments Found My Couch, But How? (2016)

As seems to be the case with most juicy bits right now, that code can be found in the HoloToolkit for Unity. Specifically, the two files involved are PlaneFinding.cs and SurfaceMeshesToPlanes.cs. PlaneFinding is the lower-level of the two, and its job is to interface with the SDK and hardware. SurfaceMeshesToPlanes is what most applications will actually interact with to get work done.

SurfaceMeshesToPlanes offers some really useful features. For example, it has two properties, FloorYPosition and CeilingYPosition, which tell you the lowest and highest points scanned in the room, respectively. It also has a method called GetActivePlanes that takes in a PlaneTypes enum and returns a list of matching Unity GameObjects.

The PlaneTypes enum is defined in SurfacePlane.cs as:

/// <summary>
/// All possible plane types that a SurfacePlane can be.
/// </summary>
[Flags]
public enum PlaneTypes
{
	Wall = 0x1,
	Floor = 0x2,
	Ceiling = 0x4,
	Table = 0x8,
	Unknown = 0x10
}
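Putting those pieces together, here’s a minimal sketch of what querying SurfaceMeshesToPlanes might look like. This is illustrative only: it assumes the HoloToolkit’s usual singleton Instance accessor, the HoloToolkit.Unity.SpatialMapping namespace, and a scene that already has the spatial mapping components scanning; the PlaneQueryExample class name is just made up for the example.

using System.Collections.Generic;
using HoloToolkit.Unity.SpatialMapping;
using UnityEngine;

public class PlaneQueryExample : MonoBehaviour
{
	void Start()
	{
		// Grab the plane finder through its singleton accessor (assumes the
		// spatial mapping and SurfaceMeshesToPlanes components are already in
		// the scene and have finished turning surface meshes into planes).
		SurfaceMeshesToPlanes planeFinder = SurfaceMeshesToPlanes.Instance;

		// The lowest and highest Y values scanned in the room.
		Debug.Log("Floor y: " + planeFinder.FloorYPosition +
			", ceiling y: " + planeFinder.CeilingYPosition);

		// PlaneTypes is a [Flags] enum, so values can be OR'd together to ask
		// for more than one kind of plane in a single call.
		List<GameObject> surfaces =
			planeFinder.GetActivePlanes(PlaneTypes.Table | PlaneTypes.Floor);
		Debug.Log("Found " + surfaces.Count + " table/floor planes");
	}
}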

So you can probably see where this is going.

I don’t have access to the source code for Fragments, but I’d bet money it works something like this:

  1. Call SurfaceMeshesToPlanes.GetActivePlanes, passing in PlaneTypes.Table
  2. Look for “tables” that are somewhere between 16″ and 24″ tall
  3. We already know where the floor is, so we put the Director’s feet there
  4. Next, we plant her butt squarely on the “low table” (which is the couch)
  5. Finally, we use Inverse Kinematics to calculate where her knees should go

That’s actually pretty simple, and it makes for some really cool stuff!
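Here’s a rough sketch of what steps 1 through 4 might look like in code. To be clear, this is my guess, not Asobo’s actual implementation: the 16″–24″ window is converted to meters, the plane GameObjects are assumed to sit at the center of each detected surface, the CouchSeater class is hypothetical, and the character rig and IK solver from step 5 are left out entirely.

using HoloToolkit.Unity.SpatialMapping;
using UnityEngine;

public class CouchSeater : MonoBehaviour
{
	// Couch seats are roughly 16"-24" (about 0.41m-0.61m) above the floor.
	private const float MinSeatHeight = 0.41f;
	private const float MaxSeatHeight = 0.61f;

	// Tries to park a character on the first couch-height "table" plane found.
	public bool SeatCharacter(Transform character)
	{
		SurfaceMeshesToPlanes planeFinder = SurfaceMeshesToPlanes.Instance;
		float floorY = planeFinder.FloorYPosition;

		// Steps 1-2: ask for "table" planes and keep the ones at couch height.
		foreach (GameObject surface in planeFinder.GetActivePlanes(PlaneTypes.Table))
		{
			float seatHeight = surface.transform.position.y - floorY;
			if (seatHeight < MinSeatHeight || seatHeight > MaxSeatHeight)
			{
				continue; // too low or too high to be a couch seat
			}

			// Steps 3-4: feet on the floor, directly below the "low table" plane.
			Vector3 seatCenter = surface.transform.position;
			character.position = new Vector3(seatCenter.x, floorY, seatCenter.z);
			// Step 5 (not shown): an IK solver would pin the pelvis at
			// seatCenter.y and bend the knees between the two anchors.
			return true;
		}

		return false;
	}
}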
