Yesterday I was lucky enough to get a chance to try out the much-vaunted HoloLens, a completely new device from Microsoft that provides “augmented reality,” as opposed to the virtual reality of devices like the Oculus Rift. The difference in wording may be subtle, but the difference in experience is quite large. Augmented reality projects objects into the actual room you are in, rather than moving you into an entirely different world as VR does.

HoloLens is quite the device. It can track the physical space around you very well, and unlike VR, it requires no markers in the room or extra cameras to track your movement. It is completely self-contained, and that may be one of its biggest wins of all.

The device on hand was near-final hardware, and it looked exactly like what has been promised for some time. Although we did not get a chance to see the preview device in January, that prototype was apparently nothing like what was presented at Build this week.

However, just like in January, we were not allowed to take any photos of the demonstration units, and all interaction with the device required us to lock our belongings in a locker before we could enter the room. They did, however, have a unit on display under glass for photo opportunities.

Let’s start with what they got right. Interacting with HoloLens is very easy. Only a couple of commands are needed, gestures like the air tap were very simple to use, and not once did I get a missed reading. That is extremely impressive considering the device is just watching my finger move in free space. When you want to interact with something, a cursor of sorts sits in the center of your field of view, and you simply focus it onto an object. The object is then highlighted, so there is no mistaking which object you are about to interact with.
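As a rough illustration of that “gaze and commit” model, here is a minimal sketch in Python: cast a ray from the head pose along the view direction, highlight the nearest object the ray hits, and treat the air tap as the commit on that object. The scene, names, and math here are assumptions for illustration, not the HoloLens API.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    name: str
    center: tuple   # (x, y, z) position in meters
    radius: float

def ray_sphere_distance(origin, direction, sphere):
    """Distance along a unit-length ray to the sphere, or None on a miss."""
    ox, oy, oz = (origin[i] - sphere.center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - sphere.radius ** 2
    disc = b * b - 4 * c  # quadratic discriminant, with a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def gaze_target(head_pos, gaze_dir, objects):
    """The gaze 'cursor': whichever object the view ray hits first."""
    hits = [(d, obj) for obj in objects
            if (d := ray_sphere_distance(head_pos, gaze_dir, obj)) is not None]
    return min(hits, key=lambda h: h[0])[1] if hits else None

# Two hypothetical holograms about two meters in front of the user.
scene = [Sphere("wall note", (0.0, 0.0, 2.0), 0.3),
         Sphere("model wall", (1.5, 0.0, 2.0), 0.3)]

target = gaze_target((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
if target:
    print(f"highlighted: {target.name}")               # no mistaking the selection
    print(f"air tap -> interact with {target.name}")   # the 'commit' gesture
```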

Another interaction method was the mouse: when looking at a PC, you can simply drag the cursor off the display and it moves into free space. In my demo, which was built around an architecture theme, this let me interact with the model, move walls around, and change the design.

Another cool feature was the ability to leave virtual notes. Looking at a wall, I could see that someone had left me a note, and with a simple air tap I was able to hear what they had left. Then I could leave a note of my own on the wall for that person to see later.

Another win was the device itself. You put it on somewhat like a welding mask, tightening the band on the back of your head with a wheel. Hopefully the devices are fairly durable, because we were helped quite a bit getting the device on and off, though that makes sense given the volume of people coming through the demo.

So what did it not deliver? The holograms had a very limited field of view. In the demos shown during the keynote, you could see holograms all around you, but the actual experience was nothing like that. Directly in front of you is a small box, and you can only see holograms inside that box, which means a lot of head turning to see what’s going on. In the construction demo I was given, I was supposed to look at a virtual “Richard,” and I was asked if I could see him. I could not. A bug had left Richard lying on the floor, stuck through a wall. I understand these demos can have bugs, but the limited field of view made it very hard to find where he was.

(Image caption: This demo is almost nothing like what you actually see in the device.)

The holograms themselves looked very good, but they were so limited in scope that I can only hope more work is done before the device goes on sale. There is a tremendous opportunity here, and it would be awful for it to be spoiled by poor hardware. Although I didn’t get a chance to see the January demo, several people who did tell me the field of view was much better on those units.

So my expectations were not met, and I attribute that to the demos provided online and during the keynote. What was shown on stage was amazing, and the actual experience was almost nothing like it.

One thing I wanted to know was what kind of hardware is inside, but there were zero answers to that right now. The device looks good, it feels good, and the audio is good, but the main attraction still leaves a lot to be desired.

Comments

  • steven75 - Friday, May 1, 2015 - link

    I know it's not released yet, but this experience makes it sound like a typical Microsoft-style "over-promise and under-deliver" product.

    Microsoft has a history filled with amazing fantasy vaporware products that demo or release with 10% of the promised capabilities. One would think that *eventually* they'd stop doing this to themselves.
  • Kracer - Friday, May 1, 2015 - link

    No practical application has ever been devised for this device (as far as I have heard). If the PR people can't make one up, I don't think one exists.
  • Brett Howse - Saturday, May 2, 2015 - link

    I have to disagree there; I think there are a pile of great applications for this. The Trimble design demo that I did was an obvious use case for something like this. Like VR, there are plenty of education possibilities too.
  • Morawka - Friday, May 1, 2015 - link

    wow, everyone is quick to shit all over HoloLens once the first negative experience is blogged about.

    calm down folks.. this thing is over 6 months from release.. Hell, Apple would only do demo loops of the Apple Watch (no touching, no interacting)

    give them time, this stuff is truly bleeding edge tech.
  • thomasxstewart - Saturday, May 2, 2015 - link

    HoloLens article in the NYT isn't much about Holo, yet does a story on Microsoft. http://www.nytimes.com/2015/05/03/technology/micro... Worth a read. I did Microsoft research from 2004 to 2008, XP SP3 to 7/8, taking a clunky service pack to a new high, then large form factor and developing a small footprint, which used Dell into libraries, so Truth, so much. Now doing Media Theatre at U of Mn. And PE.

    Not a pair of glasses with projection, more a party favor and cool-for-school honors rebuff. Maybe this season the 7 Dwarfs from Sanderson, put into glasses, could be an EZ way to get a playwright's work up to snuff. Private TS stuff gets public as Machines meet match. Paradigm Shift, New revolution on brink. Notice uptick of writer in Times and selling bit of Fab.
    Drashek
  • Morawka - Saturday, May 2, 2015 - link

    ever heard of adjectives and conjecture?
  • microsofttech - Saturday, May 2, 2015 - link

    It's still in development, which means Microsoft is still working on it. Why are you people complaining about a device that is still being made?
  • Antony Newman - Saturday, May 2, 2015 - link

    Brett,

    Thanks for the insight.

    Q1) Did you get a feel for whether there was any latency or lag?
    Q2) Any thoughts on what resolution image you were seeing?
    Q3) Do you think the colours looked rich and saturated?
    Q4) Were you aware of a framerate?
    Q5) Did they talk about using the HoloLens to SEE the image output from another device (like a camera recording an image)?

    Thanks,
    AJ
  • uhuznaa - Sunday, May 3, 2015 - link

    This thing of course is a compromise that tries to circumvent some problems by creating and accepting others.

    Let's look at a hypothetical "perfect" AR/VR setup:

    You have two displays closely in front of your eyes. The displays have high resolution (4K or more for each eye), and instead of a big lens setup you have adjustable micro-lenses integrated with every single pixel. The display sits on a multi-layer die, and the other side is actually an image sensor with a similar setup: same resolution, micro-lenses on each pixel. The layer between them carries the CPU/GPU, RAM, and storage.

    Each pixel in the sensor has a straight pathway to the display pixel it represents on the other side (with the GPU having its hands on all of them), so you get a lag-free, real-time (but still digital and controllable) 3D display of what the sensors on the outside see. Object tracking and 3D scanning is done purely optically via the spread of the sensors and a very precise measurement of the distance between them (maybe by fibre optics and lasers). The CPU and GPU can now render objects and UI elements freely into the 3D image your eyes see, or even totally replace the view with something else. Continuous scanning of the world before your eyes also builds a full 3D map of everywhere you have been, with a growing database of information about everything you ever saw.
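    A toy sketch of that per-pixel passthrough in Python/NumPy, with arrays standing in for the sensor and display (the resolution, the hologram, and all names here are made up for illustration):

    ```python
    import numpy as np

    H, W = 480, 640  # made-up panel resolution

    # What the outside sensor sees (noise standing in for a camera frame).
    sensor = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)

    # Rendered hologram layer: RGB plus per-pixel alpha (0 = fully transparent).
    holo_rgb = np.zeros((H, W, 3), dtype=np.uint8)
    holo_alpha = np.zeros((H, W, 1), dtype=np.float32)
    holo_rgb[200:280, 300:340] = (0, 200, 255)  # a small cyan "hologram" box
    holo_alpha[200:280, 300:340] = 0.8

    # Per-pixel composite: display = alpha * hologram + (1 - alpha) * passthrough.
    # Each sensor pixel maps 1:1 to its display pixel; the GPU only blends its
    # own rendered content on top of the straight-through camera feed.
    display = (holo_alpha * holo_rgb + (1.0 - holo_alpha) * sensor).astype(np.uint8)
    print(display.shape, display.dtype)  # (480, 640, 3) uint8
    ```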

    From the outside this would look like an ordinary pair of (black) glasses (if you can figure out wireless power transmission). Or you could also integrate a display between the sensor pixels on the outside, so you can display something there. Hey, Google: you could display ads here! ;-)

    Of course, integrating image sensors on the inside would be nice too, so you could map (with infrared light) the eyeball of the user and automatically correct for all the optical weaknesses of that wetware. 20/20 eyesight for everyone! Now that both sides are sensors and displays, you could make both the same (saves costs anyway). Then you can dial down the display on the outside to infrared, illuminate even darkness with infrared, "see" it, and display it in visible light on the inside. 20/20 in darkness! Easy.

    Then, shrink that design down further and finally do the very same gadget as contact lenses.

    Compared to that the "Hololens" of course is a laughable crutch. It's a start though and I'm looking forward to it.
  • DrChemist - Sunday, May 3, 2015 - link

    People here keep talking about the possible hardware as if this were like the Oculus, which actually displays full screens across your view. Instead, it only displays small portions of things within your view, and that doesn't need high-end computing. This is what augmented reality is. Plenty of phones nowadays offer these kinds of views using cameras and GPS. It doesn't require much more than a way to sense the depth and measurements of your surroundings. Stop trying to fit this into the box of what is already on the market and start thinking outside the box about what this will do. I am absolutely sure the final product will be great, even if it is not exactly as clear and crisp as what they show, much like Google Glass but with exponentially more uses for things like games. I am absolute in the idea that this will destroy Google Glass.
