AR needs to be social, contextual and location-aware to become the next major computing platform
The mobile era put the power of PCs and the Internet into our pockets. We were suddenly gifted with the ability to search for answers and consume information on the go. Augmented reality has been heralded by many as the next major computing platform shift.
But for AR to reach this stage, it cannot just be a marginal improvement over mobile. In fact, if it were, AR would not gain enough velocity to escape the orbit of the mobile paradigm and cross into mainstream adoption.
AR in its essence allows us to access knowledge at its source — timely, frictionless, and screenless.
Since snazzy AR glasses aren’t available yet to solve the screenless part, we can at least address the first two factors. To that end, Apple’s ARKit and Google’s ARCore are steps in the right direction, but so far they mostly enable novelty features that add marginal value to apps native to the mobile era. We’re still in the early days of the AR platform creation phase, though, so there’s plenty of work to be done. The goal of this post is to illustrate one possible AR user journey, why ARKit alone isn’t enough, and what’s needed to rocket-boost us to AR’s true promise.
Take a walk through an AR enabled city
It’s a bustling Saturday and you’re emerging from the subway to meet your best friend for lunch. As soon as you reach your stop, a green line appears before you, leading you through the station and onto the streets, all the way to the restaurant. As you walk through the station, the blank walls come alive with virtual storefronts of brands that cater to your interests. You notice a collection of jackets inside the H&M section, ask your AR glasses how much they cost, and save them to your wishlist to revisit later.
Soon you’re at street level, and digital overlays appear, highlighting the outlines of points of interest that your friends have mentioned to you on Facebook or reviewed on Yelp. A tourist cable train passes by and you see a map of its route on the side facing you. Suddenly, you backtrack a few steps upon realizing that you had just passed the Pokémon egg your team had left for you behind the fountain. You leave behind a big heart emoji to show your thanks. By now, the green line is flashing yellow, reminding you to hurry and not be late.
That’s the vision of an augmented city, where the digital, both useful and fantastical, blend seamlessly with the real. In the words of Tim Cook, “augmented reality is going to change everything”. However, with today’s tools, we’re just not quite there yet. And to understand why, we first have to examine ARKit’s capabilities and limitations.
Today’s typical ARKit app
Let’s start with the same scenario above except this time you’re walking down the street with your ARKit-based app on your phone. The app internally starts detecting horizontal planes like the ground or the tops of cars that are up to 15 feet in front of you. The app uses these detected planes as anchor points to place 3D digital objects so that the object stays roughly in place even if you’re moving your phone around it.
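The anchoring idea can be pictured with a toy computation (a plain-Python sketch for intuition, not the actual ARKit API): an object pinned to an anchor keeps a fixed pose in the world frame, and only its camera-relative position changes as the phone moves.

```python
# Toy illustration of AR anchoring: an object placed at a detected
# plane anchor has a fixed world-frame position; moving the camera
# only changes where the object appears relative to the camera.

def world_to_camera(point, camera_position):
    """Express a world-frame point relative to the camera
    (translation only; rotation is ignored for simplicity)."""
    return tuple(p - c for p, c in zip(point, camera_position))

# Object anchored on the ground plane, 2 m in front of the user (x, y, z).
anchor_world = (0.0, 0.0, -2.0)

# The camera moves, but the anchored object's world pose never changes.
for camera in [(0.0, 1.5, 0.0), (0.5, 1.5, -0.5), (1.0, 1.4, -1.0)]:
    rel = world_to_camera(anchor_world, camera)
    print(f"camera at {camera} -> object seen at {rel}")
```

The point of the sketch: as long as the anchor's world coordinates are stable, the object "stays in place" no matter how the viewer moves. That stability is exactly what breaks down when tracking drifts or the session restarts.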
However, you’ll notice a few things about this ARKit app:
- The digital objects aren’t persistent or social. They disappear after the user closes the app and aren’t visible to another person who walks by the same place with the app. Simply put: no persistence = no social interaction = lonely AR experience.
- You have to rescan the scene every time to have an AR experience because no models of the world are saved, so the app starts with a blank slate of its surroundings upon every relaunch. This adds a lot of friction to the user experience.
- The app also won’t be able to detect vertical planes or planes a street away (the camera just can’t see that far). If the app can’t detect planes then it can’t interact with them.
- Moreover, the app relies on the phone’s GPS and compass/magnetometer to estimate your location and place digital objects correctly. These sensors can be unreliable, especially in cities (remember the times Google Maps placed you streets away from your actual location), which means a Starbucks rating might erroneously land on top of the McDonald’s 20 meters to the right.
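To put that positioning error in concrete terms, here’s a small sketch (illustrative coordinates, not real measurements) of how a tiny longitude offset in a GPS fix translates into tens of meters on the ground:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# A GPS fix in San Francisco that's off by a mere ~0.00023 degrees of longitude
true_pos = (37.7749, -122.41940)
gps_fix = (37.7749, -122.41963)
err = haversine_m(*true_pos, *gps_fix)
print(f"positioning error: {err:.1f} m")  # roughly 20 m
```

An error in the fourth decimal place of longitude is already enough to pin a rating on the wrong storefront.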
Because of these limitations, most AR apps are limited to flat surfaces or so-called “tabletop AR experiences”, which for the most part have proven to be novelty products. If they do fulfill a niche, the use cases don’t drive daily usage — after all, would you really use the IKEA app to visualize where to put your furniture every day?
In order to make the jump to the mainstream, AR needs to be contextual, timely, persistent, scalable, and power efficient. The breakthrough solution needed is known amongst the AR community as the AR Cloud, which industry veteran Matt Miesnieks defines as “a machine-readable 1:1 scale model of the real world. Our AR devices are the real-time interface to this parallel virtual world which is perfectly overlaid onto the physical world”.
Once this technology is here, a lot of magical things can happen. At Sturfee, we are building a version of the AR Cloud at world scale.
How Sturfee supercharges ARKit
In a nutshell, the Sturfee AR Cloud makes ARKit/Core much more robust for location-based AR applications. Here’s how we tackle some of the limitations of the current platforms:
We map out an exact 3D model of the real world using satellite imagery (the view from the sky), continuously monitoring and converting geospatial features like buildings and cityscapes into a machine-readable format. The Sturfee engine then uses visual processing to analyze the features around you (the ground view) to instantaneously and accurately determine where you are and which way you’re facing. This works through any camera, whether it’s a smartphone, a headset, or even a drone.
Now, ARKit can use these Sturfee-derived geospatial measurements to reduce the positioning errors introduced by the phone’s GPS and internal sensors, and accurately detect street-level 3D features like changes in terrain elevation, building surfaces (vertical planes), and even trees. As an added benefit, by limiting ARKit’s mesh computation to a few objects in the scene (e.g., people and cars), the app saves significantly on battery power.
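One way to picture the correction step (a hand-wavy sketch for intuition, not Sturfee’s actual algorithm) is inverse-variance weighting: when a noisy GPS fix is fused with a far more precise visual-positioning fix, the combined estimate lands close to the visual one.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two 1-D position estimates.
    The lower-variance (more trusted) source dominates the result."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical numbers: GPS easting error ~20 m with ~10 m std dev;
# visual positioning ~0.5 m off with ~0.3 m std dev.
gps_err_m, gps_var = 20.0, 10.0 ** 2
vis_err_m, vis_var = 0.5, 0.3 ** 2
fused, var = fuse(gps_err_m, gps_var, vis_err_m, vis_var)
print(f"fused easting error: {fused:.2f} m (variance {var:.3f})")
```

With those assumed noise levels, the fused position error collapses from ~20 m toward ~0.5 m, which is the difference between labeling the McDonald’s and labeling the Starbucks.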
Unleashing world-scale AR
So in brief, Sturfee enables that same ARKit app to fully exploit the potential of your surroundings and unlock a host of creative design and social engagement applications. Now, instead of Pikachu just walking across the ground, imagine it jumping off walls and hiding behind statues. Or a Snapchat Star Wars filter that sends porgs dashing across the park, jumping over benches rather than just fluttering around on your table. Below is some footage from apps we’ve built in-house:
Are you ready?
As we race towards our Q1 2018 SDK launch, we’ll share a series of posts that’ll reveal our vision for an AR future and highlight some cool things that are possible with Sturfee’s tech to get your creative juices flowing. Here are some examples:
- Build augmented cities. Think of the cool parts of Minority Report minus the creepy precogs.
- Assist those who build and protect our cities, from architects and city planners to fire fighters.
- Optimize HD mapping efforts and drive one of the key building blocks of autonomous vehicles.
- Enable drone delivery fleets to navigate dense urban environments where metal and tall structures interfere with sensitive IMUs.
- Help NGOs and governments determine where to deploy support resources.
If you want to build amazing world-scale AR experiences (and beyond), partner with us, or join our team, we should connect.
Supercharge ARKit & Core Apps with World-Scale AR Cloud was originally published in Sturfee on Medium.