ARKit “just works” on iPhones. How is this possible, and why don’t other systems work the same way? Let’s dig into the tech.

Apple’s announcement of ARKit at the recent WWDC has had a huge impact on the augmented reality ecosystem. Developers are finding that, for the first time, a robust and (with iOS 11) widely available AR SDK “just works” for their apps. There’s no need to fiddle around with markers, initialization, depth cameras, or proprietary creation tools. Unsurprisingly, this has led to a boom in demos (follow @madewitharkit on Twitter for the latest). However, most developers don’t know how ARKit works, or why it works better than other SDKs. Looking “under the hood” of ARKit will help us understand the limits of ARKit today, what is still needed and why, and help predict when similar capabilities will be available on Android and on head-mounted displays (either VR or AR).

I’ve been working in AR for 9 years now, and have built technology identical to ARKit in the past (sadly before the hardware could support it well enough). I have an insider’s view of how these systems are built and why they are built the way they are.

This blog post is an attempt to explain the technology for people who are somewhat technical but not computer vision engineers. Some of the simplifications I’ve made aren’t 100% scientifically precise, but I hope it helps people understand the topic at least one level deeper than they already do.
