Last week I did a post on my initial reactions to using AR apps on the iPhone.
My overall impression was that I’m not going to get too excited about this first generation, but I am looking forward to what’s coming.
Watching this future-gazing video brought home a couple of the reasons why AR still has a long way to go. (WARNING: gratuitous use of Coldplay!)
I said last week that one of the main shortcomings of the current crop is that they only understand the spatial environment in terms of direction and distance. In real life, when we navigate from place to place, we do it along lines (roads, paths and so on), via nodes (junctions, bus stations) and around barriers (rivers, walls, dodgy areas of town). So, two things: AR needs to be able to understand maps, and REALLY good AR needs to understand MY perception of space by looking at my behaviour over time and recognising my choices.
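To make the map point a bit more concrete, here’s a minimal sketch (Python, with a made-up five-node street network) of the difference between “direction and distance” and actually routing along lines and nodes while steering around barriers. The place names, distances and the list of things I avoid are all invented; a really good AR app would learn that last part from my behaviour rather than have it hard-coded.

```python
import heapq

# A toy street network: nodes are junctions and landmarks, edges are walkable
# paths with distances in metres. Every name and number here is invented.
edges = {
    ("home", "junction_a"): 120,
    ("junction_a", "bus_station"): 300,
    ("junction_a", "alley"): 80,    # short cut...
    ("alley", "shop"): 60,          # ...through a dodgy part of town
    ("bus_station", "shop"): 150,
    ("home", "riverbank"): 200,     # dead end: the river is a barrier
}

# Things this particular user steers around. A really good AR app would
# infer this from behaviour over time, not from a hard-coded set.
avoid = {"alley"}

def build_graph(edges, avoid):
    graph = {}
    for (a, b), dist in edges.items():
        if a in avoid or b in avoid:
            continue  # treat avoided nodes and barriers as impassable
        graph.setdefault(a, []).append((b, dist))
        graph.setdefault(b, []).append((a, dist))
    return graph

def route(graph, start, goal):
    """Plain Dijkstra: distance travelled along lines and nodes,
    not straight-line direction and distance."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + d, nxt, path + [nxt]))
    return None, []

print(route(build_graph(edges, avoid), "home", "shop"))
# -> (570, ['home', 'junction_a', 'bus_station', 'shop'])
# The as-the-crow-flies answer would happily point me down the alley
# or straight across the river.
```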
The other issue struck me when I saw how “fixed” the points of interest were in the simulation. Not only were they free of the movement latency you get with the apps I’ve had a go with, but they were pointing to doorways, adhering to the surfaces of walls, clinging to the edges of books and so on.
From a technical point of view, I guess a few things need to improve before that can happen:
- Processing power and connection speeds in devices need to be substantially better
- GPS positioning and directional sensing need tightening (the sketch below shows why this one matters)
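For a rough sense of why that second point matters: the current apps basically take a GPS fix and a compass heading, work out the bearing to a point of interest, and slide a label across the screen accordingly. Here’s a little sketch of that calculation (Python, with invented coordinates, an assumed 60° field of view and a 320-pixel-wide screen); a few degrees of compass error is enough to push a label off a doorway and onto the wall beside it.

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, device_heading, fov_deg=60, screen_width=320):
    """Horizontal pixel position of a label, given where the phone thinks
    it is pointing. Direction is all a first-generation AR app has to go on."""
    offset = (poi_bearing - device_heading + 180) % 360 - 180  # -180..180
    return screen_width / 2 + (offset / (fov_deg / 2)) * (screen_width / 2)

# Invented coordinates: me standing in the street, a shop doorway ~20 m away.
me = (51.50000, -0.10000)
door = (51.50017, -0.10005)

b = bearing(me[0], me[1], door[0], door[1])
print(round(screen_x(b, device_heading=b)))      # 160: label sits on the door
print(round(screen_x(b, device_heading=b - 5)))  # ~187: five degrees of compass error
# A few degrees of heading error shifts the label tens of pixels sideways,
# which at 20 m is easily the difference between the doorway and the wall.
```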
But more importantly, the app needs to be able to see the world as I read it, a little bit like facial recognition – spatial recognition, maybe. When I look at a shop I can recognise where the door is, I can read the entrance and exit signs, and I can distinguish between a war memorial and a dustbin. To get AR as good as it looks in the video, the app needs to be able to do that too, and that is a loooong way off!
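Nothing I know of does this yet, but to illustrate the shape of the problem, here’s a sketch of what “spatial recognition” might look like from the app’s side. The recogniser itself is entirely hypothetical (stood in for by a hard-coded list of detections); the point is that the overlay gets anchored to the recognised doorway rather than to a raw GPS coordinate.

```python
# Entirely hypothetical: pretend some computer-vision layer has looked at the
# current camera frame and labelled the regions it thinks it recognised.
# That recogniser is the hard part, and it is the bit that doesn't exist yet.
detections = [
    {"label": "wall",         "box": (0,   0,   320, 480), "confidence": 0.95},
    {"label": "doorway",      "box": (110, 140, 210, 420), "confidence": 0.80},
    {"label": "exit_sign",    "box": (130, 100, 190, 130), "confidence": 0.70},
    {"label": "war_memorial", "box": (260, 200, 310, 400), "confidence": 0.40},
]

def anchor_for(detections, wanted, min_confidence=0.6):
    """Pick the best-recognised instance of the thing we want to annotate
    and return the point on screen where the overlay should stick."""
    candidates = [d for d in detections
                  if d["label"] == wanted and d["confidence"] >= min_confidence]
    if not candidates:
        return None  # fall back to plain direction-and-distance placement
    x1, y1, x2, y2 = max(candidates, key=lambda d: d["confidence"])["box"]
    return ((x1 + x2) // 2, (y1 + y2) // 2)

print(anchor_for(detections, "doorway"))  # -> (160, 280): pin the label here
print(anchor_for(detections, "dustbin"))  # -> None: nothing recognised
```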
AR apps need a better grasp of how we squishies use and understand our environment, so I withhold my satisfaction and unbridled excitement until that starts to happen. (Although I still think it’s all quite cool…)