HoloLens hands-on: The Developer Edition

The shipping hardware is lighter, easier to adjust, and much more comfortable than last year’s prototypes

It's been an interesting year or so following the development of Microsoft's HoloLens. First there was the skeletal prototype unveiled at its January 2015 launch, with an umbilical cord to a PC slung around your neck. Then came the first stand-alone units at Build later that year, where I was able to build and deploy applications to a USB-connected device.

Now Microsoft has started shipping the first developer hardware, and it ran the press through an abbreviated form of the training it's giving the first tranche of developers at Build 2016. By building a new application, we were able to explore some of the newly announced collaborative features, including user avatars and shared experiences.

The shipping hardware is a big improvement over last year's prototypes. It's lighter, easier to adjust, and much more comfortable -- and we're finally allowed to take photographs. There are also improvements in the field of view, with more vertical space for the images. It's still a letterbox view of the virtual world, but it is at least a wider one. Images are also sharper, with text and line art clearly visible.

Built-in calibration

We started with a new built-in calibration app. This allowed us to first adjust the device so that we could see the display area clearly, then test standard gestures with both the left and right eye, ensuring that overlays worked well and that the device's cameras had a clear view of our gestures. One note about the developer hardware: There's no longer any need to measure and set your inter-pupillary distance in the device settings; instead, it's all handled automatically.

The development tutorial Microsoft ran at Build 2016 was again based on Unity's 3D modeling tools, using scripts to add functions from what Microsoft is calling its HoloToolkit, much of which is available through its online developer resources and can be used in the HoloLens emulator.

Starting with an existing animated 3D object in Unity, we were able to use scripts to place it anywhere in a stage with a simple gesture, using gaze to point a cursor. Once the scripts had been enabled, the app was exported from Unity into Visual Studio, where it was compiled and wirelessly delivered to the HoloLens.
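
To give a flavor of what those scripts look like, here's a minimal gaze-to-place sketch built on stock Unity APIs rather than the HoloToolkit's own components; the class name, the tap-driven flag, and the 10-meter ray length are my own illustrative assumptions, not the tutorial's actual code.

using UnityEngine;

// Minimal gaze-to-place sketch using stock Unity APIs. The HoloToolkit
// ships richer versions of this behavior; the class name, the tap-driven
// "placing" flag, and the 10-meter ray length are illustrative assumptions.
public class GazePlaceable : MonoBehaviour
{
    public bool placing;  // toggled by a tap gesture elsewhere in the app

    void Update()
    {
        if (!placing) return;

        // Cast a ray from the user's head along their gaze direction.
        Transform head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, 10.0f))
        {
            // Snap the object to where the gaze ray meets the spatially
            // mapped room surface.
            transform.position = hit.point;
        }
    }
}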

This last step was a big change from 2015, removing the need to tether the device while working with it. Now code can be compiled and deployed while you're wearing the device; all you need is its IP address. There's a version of the Windows Start menu, with Cortana and a few basic applications ready to use out of the box; however, I didn't have time to explore these.

Sharing objects

The first steps of the tutorial were similar to last year's: creating and placing an object on a stage. What was new was the process of sharing an object and creating a collaborative experience, in our case an application shared among eight users, each with their own HoloLens.

In order to set up sharing, you'll need a sharing service running on a separate device. This is used to host object coordinates, as well as to deliver image files to devices. An anchor point is uploaded by the first HoloLens to connect to the sharing service, and it's used as a baseline for all subsequent actions and communications. The approach makes sense, as the shared space is based on a surface scan from that first device's 3D cameras.
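
The coordinate handling is the key idea here, and it reduces to a couple of transforms. Here's a conceptual sketch, assuming each device holds a Unity Transform for the shared anchor; the real sharing service also handles serialization and networking, which this omits.

using UnityEngine;

// Conceptual sketch of anchor-relative coordinates. Positions travel over
// the wire relative to the shared anchor, so each HoloLens can reconstruct
// them in its own world space; serialization and networking are omitted.
public static class AnchorRelative
{
    // On the sending device: express a world position in anchor space.
    public static Vector3 ToAnchorSpace(Transform anchor, Vector3 worldPos)
    {
        return anchor.InverseTransformPoint(worldPos);
    }

    // On the receiving device: map an anchor-space position back into
    // this device's own world coordinates.
    public static Vector3 ToWorldSpace(Transform anchor, Vector3 anchorPos)
    {
        return anchor.TransformPoint(anchorPos);
    }
}

Expressing everything relative to the anchor is what lets several headsets, each with its own arbitrary world origin, agree on where an object sits in the room.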

A shared object is used the same way as one in a solo experience: you can interact with it using the standard voice-recognition and gesture commands. However, with a basic shared object you can only interact with the object itself, not with other users. It's a model that might work well for exploring and sharing architectural models, but it's not really suitable for interactive tutorials or for more complex immersive applications like NASA's Mars exploration tools.
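
Unity's built-in speech APIs give a rough idea of how a voice command gets wired to an object like this; the "place object" phrase and the reuse of the placing flag from the earlier sketch are assumptions for illustration.

using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal voice-command sketch using Unity's KeywordRecognizer, the usual
// route to simple HoloLens voice commands. The "place object" phrase and
// the reuse of the GazePlaceable flag are illustrative assumptions.
public class VoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "place object" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Re-enter placement mode on this shared object.
        GetComponent<GazePlaceable>().placing = true;
    }

    void OnDestroy()
    {
        recognizer.Stop();
        recognizer.Dispose();
    }
}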

Sharing experiences

Collaborative spaces are possibly the most interesting feature of HoloLens, as they take mixed reality and make it a shared experience. You can use them to interact with an object and with other users, whether they're physically present or at the other end of a network connection. This is a big step beyond the solo virtual worlds of most VR headsets, or the basic AR of information-overlay devices like Google Glass.

Adding the Discovery service to a HoloLens app means you can start to interact with other users. In our demo, we were able to add code that let users select an avatar that would be added to the HoloLens simulation space. This also lets the sharing service work with user positions to, for example, control how an object is placed in that shared 3D space. In our demo, we had shared control over an object placed at the center of our shared vision, which could be moved and re-placed with a simple voice command.
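
As a rough sketch of the kind of per-user state such a service might relay, and of one way to compute "the center of our shared vision", consider the following; the field names and the averaging heuristic are my assumptions, not the HoloToolkit's actual message format.

using UnityEngine;

// Conceptual sketch of the per-user state a sharing service might relay,
// and one way to approximate "the center of our shared vision". Field
// names and the averaging heuristic are assumptions, not the actual
// HoloToolkit message format.
[System.Serializable]
public struct UserState
{
    public int userId;       // assigned by the sharing service
    public int avatarIndex;  // the avatar this user selected
    public Vector3 headPos;  // head position, relative to the shared anchor
    public Vector3 gazeDir;  // unit gaze direction, also anchor-relative
}

public static class SharedPlacement
{
    // Average everyone's gaze targets at a fixed distance to pick a
    // placement point all users can comfortably see.
    public static Vector3 Center(UserState[] users, float distance)
    {
        Vector3 sum = Vector3.zero;
        foreach (UserState u in users)
        {
            sum += u.headPos + u.gazeDir * distance;
        }
        return sum / users.Length;
    }
}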

Interacting with avatars

In addition, HoloLens' toolkit provides tools for interacting with other users' avatars, allowing a shared experience layered on top of the still-visible physical world, which is itself mapped into the HoloLens space using the device's 3D cameras and spatial mapping.

Developers receiving the first tranche of HoloLens hardware over the next few days are going to be pleasantly surprised. Microsoft is delivering hardware that's more mature than expected, with a growing and improving software development environment. It's not perfect by any means, and it's certainly not ready for consumers or business users.

But as the software gets better, and as more 3D applications are developed and shared through HoloLens' dev community, both Microsoft and its developers are going to gain the experience needed to build mixed reality experiences. It's going to be interesting to see what they do with this first set of production hardware and how it evolves over the next couple of years, before it ends up in the office and in the home.

Copyright © 2016 IDG Communications, Inc.