Introducing Azure’s augmented reality: spatial anchors

Microsoft brings cloud-based cross-platform AR services to its HoloLens, PCs, and other devices

Microsoft is back in the augmented-reality game, with the launch of both its HoloLens 2 headset and its Azure Kinect camera. Lighter and more powerful than its predecessor, HoloLens 2 is not only a standalone device but can be linked to a set of Azure services that take mixed reality (Microsoft’s term for augmented reality, or AR) into the public cloud.

Azure’s first tranche of services is intended for use with any platform, going well beyond Microsoft’s own tools. Even with a $1,500 price reduction over the original HoloLens, at $3,500 HoloLens 2 isn’t a device you’ll hand out to every maintenance worker or to everyone taking a training class. With AR-ready devices in pockets and virtual-reality (VR) capabilities on desks, all your users can be part of an experience without the cost of headsets.

Using a mix of AR and VR devices makes a lot of sense. At last year’s finals for Microsoft’s student development competition, one of the more interesting projects was a training app for firefighters. Users had a full VR experience, exploring a burning building and using different firefighting tools. Meanwhile, a trainer used a HoloLens to monitor their progress through the simulation, with a view of the VR model displayed on a table.

That mix of technologies is at the heart of Azure’s new tools. Instead of holding everything that’s needed to build an environment in a standalone device like HoloLens or a VR-ready PC, the public cloud holds both your models and a way of fixing those models to a specific physical location. Once that data is in Azure, you can access it with Apple’s ARKit and Google’s ARCore, as well as Microsoft’s own tools.

At the heart of the new platform are links that tie together the physical and the virtual. Microsoft calls these links spatial anchors. They’re maps that lock virtual objects to the physical space that hosts the environment, providing a link that can be used to show the live state of a model across multiple devices. Models can be linked to other sources of data, providing a display surface for internet-of-things or other systems. You can add a further layer of security by tying role-based access controls to a map, so that specific capabilities are linked to specific users.
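As a mental model, a spatial anchor amounts to a small cloud-hosted record. The sketch below is a minimal Swift illustration of that idea; the type and field names are assumptions made for this article, not the Azure SDK’s actual API.

```swift
import Foundation

// Illustrative sketch only: this is not the Azure SDK's API, just a model
// of the data a spatial anchor carries.
struct SpatialAnchorRecord {
    let identifier: UUID                // cloud-assigned anchor ID
    var position: SIMD3<Float>          // 3D coordinates in the mapped space
    var properties: [String: String]    // e.g. what to render, links to other anchors
    var expiration: Date?               // optional expiry; nil means the anchor persists
    var allowedRoles: Set<String>       // hook for role-based access control
}
```

The properties dictionary is what lets an anchor act as a display surface: a client can look up what to render at that point and where the data behind it lives.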

Building spatial anchors

Spatial anchors are deliberately cross-platform, with key dependencies and libraries for client devices available through services like CocoaPods, and with sample code in native languages like Swift. You need to register the appropriate accounts in Azure too, so that code can authenticate against the spatial-anchor services. Microsoft is continuing to use Unity for its tools, though a recent announcement indicates that support for Epic’s Unreal will come soon.
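As a rough sketch of that setup step, the snippet below configures a client session with the credentials you get when registering the service. SpatialAnchorSession is a hypothetical stand-in for whatever class the platform SDK exposes; the account ID and key come from the Azure portal registration described above.

```swift
import Foundation

// Hypothetical wrapper; the real SDK's class and property names differ by platform.
final class SpatialAnchorSession {
    var accountId = ""
    var accountKey = ""

    func start() {
        // A real client would begin tracking the device's pose here and
        // authenticate against the spatial-anchor service.
        print("Session started for account \(accountId)")
    }
}

let session = SpatialAnchorSession()
session.accountId = "<your-account-id>"    // created when you register in Azure
session.accountKey = "<your-account-key>"  // keep out of source control in production
session.start()
```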

To use the service, you first need to create an appropriate Azure service for your application. Azure’s spatial anchors support Microsoft’s existing mobile-back-end-as-a-service tools, so the learning curve isn’t too steep, and the pricing models are familiar. Once you have an Azure App Service instance up and running, your client apps can communicate with your spatial anchors and models via REST APIs.
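Because the service is fronted by REST APIs, a client can talk to it with ordinary HTTP machinery. The sketch below uses Swift’s standard URLSession; the endpoint URL, payload shape, and bearer-token header are illustrative assumptions, not the documented API surface.

```swift
import Foundation

// Hypothetical endpoint: substitute whatever your App Service instance exposes.
let endpoint = URL(string: "https://example-anchors.azurewebsites.net/api/anchors")!

var request = URLRequest(url: endpoint)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
// Assumed auth scheme for illustration; a real client presents a token
// issued against the Azure account registered for the service.
request.setValue("Bearer <access-token>", forHTTPHeaderField: "Authorization")

// A minimal anchor payload: 3D coordinates plus rendering properties.
let payload: [String: Any] = [
    "position": ["x": 1.2, "y": 0.0, "z": -3.4],
    "properties": ["render": "notice-board", "next": "anchor-7"]
]
request.httpBody = try? JSONSerialization.data(withJSONObject: payload)

URLSession.shared.dataTask(with: request) { _, response, error in
    if let error = error {
        print("Upload failed: \(error)")
    } else if let http = response as? HTTPURLResponse {
        print("Anchor stored, status \(http.statusCode)")
    }
}.resume()
```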

At the heart of spatial anchors is a map of the environment where your AR content is going to be hosted. That may mean using an app to locate a user in an environment and then generate a map of the area. Some devices, like HoloLens, do this automatically. Others need you to scan the area to build the map. Anchors are created using an app’s own AR tools and then stored as 3D coordinates in Azure. Anchors can have additional information associated with them, using properties to determine what’s rendered and how different anchors are linked.
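On Apple devices, for example, the local half of that workflow runs through ARKit. The sketch below turns a locally placed ARKit anchor into the kind of cloud record sketched earlier; makeCloudRecord and SpatialAnchorRecord are illustrative, while ARAnchor and its transform are real ARKit types.

```swift
import ARKit

// Sketch: convert a locally placed ARKit anchor into the illustrative
// SpatialAnchorRecord defined above; the property values are examples.
func makeCloudRecord(from localAnchor: ARAnchor) -> SpatialAnchorRecord {
    // The anchor's 4x4 transform holds its pose; the fourth column is the translation.
    let t = localAnchor.transform.columns.3
    return SpatialAnchorRecord(
        identifier: UUID(),
        position: SIMD3<Float>(t.x, t.y, t.z),
        properties: ["render": "pump-overlay"],  // tells clients what to draw here
        expiration: nil,
        allowedRoles: ["maintenance"]
    )
}
```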

Spatial anchors don’t need to be permanent; they can be given expiration dates. Once an anchor has expired, it is no longer visible to users. You can also delete anchors completely, for example when you no longer need to share specific content.
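Continuing the illustrative record from above, expiry is just a timestamp on the anchor, and deletion is an explicit, separate call; the endpoint named in the helper is the same hypothetical one used earlier.

```swift
import Foundation

// Give an anchor a 30-day lifetime; once past it, the service would stop
// returning the anchor to clients. Uses the illustrative record from above.
var record = SpatialAnchorRecord(
    identifier: UUID(),
    position: SIMD3<Float>(0, 0, 0),
    properties: [:],
    expiration: nil,
    allowedRoles: []
)
record.expiration = Calendar.current.date(byAdding: .day, value: 30, to: Date())

// Deletion modeled as a helper that would issue the HTTP DELETE.
func deleteAnchor(_ id: UUID) {
    print("Would DELETE /api/anchors/\(id)")  // hypothetical endpoint, as above
}
deleteAnchor(record.identifier)
```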

Getting the right experience

One interesting option for spatial anchors is in-building navigation. Once spatial anchors are linked, and you have a map of a space (which can be an entire building), you can generate navigation routes among linked anchors. Guidance hints can be displayed in your app, such as arrows signifying the direction and distance to the next anchor, as in the sketch below. By placing and linking anchors in an AR app, you give your users a more natural experience, with indicators placed where a user would expect to see them.
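The arrow itself is simple vector arithmetic. This sketch, using Apple’s simd module, computes the distance and normalized direction from the user’s current position to the next linked anchor; the positions are example values.

```swift
import simd

// Compute the distance and a unit direction vector for an in-app guidance arrow.
func guidanceHint(user: SIMD3<Float>, nextAnchor: SIMD3<Float>)
        -> (distance: Float, direction: SIMD3<Float>) {
    let delta = nextAnchor - user
    let distance = simd_length(delta)
    // Guard against a zero-length vector when the user reaches the anchor.
    let direction = distance > 0 ? delta / distance : SIMD3<Float>(repeating: 0)
    return (distance, direction)
}

// Walking a chain of anchors linked through a "next" property repeats this
// per segment, updating the arrow as each anchor is reached.
let hint = guidanceHint(user: SIMD3<Float>(0, 0, 0),
                        nextAnchor: SIMD3<Float>(2, 0, -3))
print(String(format: "Next anchor %.1f m away", hint.distance))
```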

Getting your anchors right matters: AR is an immersive experience, and badly placed anchors will quickly alienate users. Microsoft’s guidelines suggest that anchors need to be stable and associated with physical objects. You need to consider how they’ll be viewed, looking at them from different angles to make sure they can be understood by your users, and to ensure that access isn’t blocked by other objects in a space. After all, you don’t want users falling over a table while they try to read a notice on a wall. Targeting initial anchors to specific entry points simplifies things too, making it a lot easier for a user to enter your experience.

Rendering 3D content in Azure

Microsoft plans to launch a remote rendering service, using Azure to deliver fully rendered images to devices. Building a convincing environment requires a lot of detail. While the hardware in HoloLens 2 is a significant upgrade, it’s still not easy to deliver a full rendering of a piece of industrial equipment in real time. You’ll need high-bandwidth connections and the remote rendering service so high-resolution images can be pre-rendered and then delivered to users. The same model can be shared across devices, rendered once and used many times.

There are two types of device: tethered and untethered. Tethered devices can take advantage of the GPUs built into modern graphics workstations, displaying fully rendered images. The lower-end GPUs in untethered devices (even with enhancements like HoloLens’s HPU) cannot handle complex images, so content is “decimated”: delivered with fewer polygons and compressed image content.

GPUs have been in the public cloud for a while now. While much of Azure’s Nvidia GPU support is focused on CUDA and large-scale cloud-hosted compute, it offers a series of NV-class VMs intended for use as rendering hosts and for cloud-based visualization apps.

Azure Remote Rendering is currently in a private beta, and pricing has not been set. The likely product offering is a service based on NV-series hardware, using common file formats and a general-purpose rendering tool. By taking that capability and using it with HoloLens and other devices, you could offload compute and power-intensive work from portable devices, while still being able to deliver high-fidelity images.

