Creating Great AR Experiences 

Session 805 WWDC 2018

Engaging AR experiences are easy to start and navigate, persuasively realistic, and highly immersive. Learn best practices for successfully bringing people into an AR experience, teaching them about how to interact and engage with virtual content, and making your AR content look beautiful and grounded in the real world.

[ Music ]

[ Applause ]

Hey, everyone.

Welcome to the AR design session.

My name is Grant Paul, from the human interface team at Apple.

And, I hope you’ve been having a great WWDC this week.

So, for this session, I’m going to start off by talking about how to design great AR apps and games, with their user interfaces and interactions.

And then, Omar’s going to come up, and he’s going to tell you all about making 3D models that look great in AR, and feel great in AR.

So, before we get to that, I want to quickly talk about the basics.

So, if you’re new to it, you might be wondering exactly what it is that we mean when we talk about AR.

But, even if you’re an AR expert, I want to talk about what we mean for this session.

So, AR, of course, stands for augmented reality.

And, let’s break down what that means a little bit.

And, we can start with reality, because it’s a little bit easier to get into.

So, reality means that AR deals with things in the real world.

And, that’s a little bit different than other things we might have done on our devices, that take place on the device, or on the internet.

But, with AR, things happen in the real world.

They happen in the room around you, in the environment you’re in, or in your place on the map.

Where you are.

But, the important thing is, it’s a little bit different.

And the other part of augmented reality, is augmented.

And, that can mean a few different things.

So, augmented can mean augmenting what you know about the world.

It can mean getting information that the device can understand about the world, and giving that to you.

It can mean placing virtual things out into the world, giving the virtual this physical context.

And, it can mean taking something that is real, like your face when you put on an Animoji as a mask.

Taking something that is real, and enhancing it, augmenting it.

So, that’s what we mean when we talk about AR, when we talk about augmented reality.

And, with that, I want to get to the first half of this session.

I want to start talking about how to design interfaces, and design interactions for your AR apps and your AR games.

And, the first part of that is to talk about how to get people into AR.

How to help guide people into AR.

So, I’ll show you how iOS 12’s built-in AR experiences help guide people into AR.

And then, how you can take those principles, and apply them in your own apps.

Next, we’ll talk about the different ways you can present content in AR.

The different possibilities that ARKit opens up, and also some tips and tricks that are great no matter what kind of AR app you’re making.

And finally, we’ll talk about interactions in the world.

It’s a little bit different when you’re building an AR app than when you’re building a 2D app, for what kind of interactions make sense, and what kind of interactions you’re going to want to use.

So, we’ll figure out what still works great in AR, and then, where we might need some new kinds of interactions.

But first, I want to get started with talking about how to get into AR.

And, what I’m talking about here is after somebody’s downloaded your app, after they’ve found the AR experience in the app, they’ve opened it up, what I’m talking about here is when ARKit needs to understand the world.

Because for every AR app, ARKit needs some level of understanding of the world in order to start the AR experience.

In order to get it going.

Because every AR app needs that understanding in order to place its objects into the world, or show the user that information.

And, the way ARKit builds that understanding of the world, is by getting you to move your device, by having you move around.

And, that’s a little bit different from other times you might have looked through a device and seen a camera preview in the past.

Like, for example, when you’re taking a photo, you just need to point the device where you want to frame the shot.

But, in AR, you really do need to start moving, looking at the same place from different positions, and looking at it from some different angles.

So, the trick here is to let people know what they need to do.

Let them know how to move, that they need to move their device.

And, the way to do that is to give them a fixed reference.

Give them something to base that understanding off of.

And, let’s talk about that.

Let’s take a look at an example.

So, this is the game Euclidean Lands.

And, what it’s showing here, is the device moving around inside of a room.

And, that’s really great, because you can see here, without any text, exactly what you need to do: you need to move the device within that room, and just turning the device to look at a different angle isn’t going to be enough.

So, without any text, it’s really clear from that fixed reference of the room, exactly what you need to do.

And, usually, most of the time, that is all you’re going to need.

ARKit is even faster to build that understanding of the world in iOS 12.

So, in even more cases that is all you’re going to need.

You just need to start moving, and you’re ready to go.

But, of course, there are some situations that aren’t quite as well suited to AR.

Maybe you’re in a dark room, or it’s really reflective, and it’s not as easy for ARKit to start building that understanding.

So, it can take ARKit a little bit of time to get ready.

It might not happen immediately.

And, in those situations, if you’re still being told to move your device, you’re still looking at something that says to you, move your device around, people might start to wonder, why is the app not working?

Does the app not understand the movement that they’re making?

And, it might start to get confusing.

So, what you need is some kind of feedback that lets people know that they’re not doing anything wrong.

They’re doing exactly what they should be doing to get into AR.

And, they should just keep doing it.

So, let’s take a look at an example.

So, this is how iOS 12’s built-in AR apps help guide people into AR.

You can see the device moving over the fixed reference of that surface, showing you that you need to move it.

You can’t just keep it in one place, or rotate it.

And then, once you’ve started moving the device, the surface transitions into a cube.

And, that cube spins with the movement of the device, giving you real, connected, direct feedback that you’re on the right path, and you’re doing the right thing.

And then, once ARKit has built that understanding, it’s all ready, the cube spins away, and you’re ready to go in your AR experience.

So, that’s how the built-in apps in iOS 12 help guide people into AR.

And, your apps should follow those same principles.

They should help people know that they need to move their device, and give them that feedback that they’re doing the right thing.
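In code, that guidance is usually driven off of ARKit’s camera tracking state. Here’s a minimal sketch in Swift: the `TrackingState` enum below is a self-contained stand-in mirroring ARKit’s `ARCamera.TrackingState` cases, and the hint strings are purely illustrative, not Apple’s wording.

```swift
// Hypothetical stand-in mirroring ARKit's ARCamera.TrackingState cases.
// In a real app you would switch on the real type inside
// session(_:cameraDidChangeTrackingState:).
enum TrackingState {
    case notAvailable
    case limited(Reason)
    case normal

    enum Reason { case initializing, excessiveMotion, insufficientFeatures, relocalizing }
}

// Maps a tracking state to an optional guidance message.
// nil means "ready to go, dismiss the guidance."
func guidanceHint(for state: TrackingState) -> String? {
    switch state {
    case .notAvailable:
        return "Unable to start AR. Try restarting the experience."
    case .limited(.initializing), .limited(.relocalizing):
        return "Keep moving your device to look at the scene from different angles."
    case .limited(.excessiveMotion):
        return "You're on the right track. Just move a little more slowly."
    case .limited(.insufficientFeatures):
        return "Try pointing at a surface with more detail, or turning on a light."
    case .normal:
        return nil
    }
}
```

The important part is that the limited states still read as encouragement to keep going, not as errors.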

But, your apps don’t need to follow that same style.

They don’t need to look like line drawings.

AR should feel like an integrated part of your app.

It should feel native to your app’s style.

And, it shouldn’t feel like something that’s tacked on, or a flow that’s added after the fact.

So, the important part is to help people down the right path, but it should feel like part of your app while you’re doing that.

And, the last thing I want to talk about for helping get people into AR, is that it’s important to balance your instructions, your guidance, how you’re helping people get into AR, with being really efficient when people already know what to do.

So, if somebody already knows what to do, they’re just going to start moving their device right away.

They don’t need any kind of instructions.

They don’t need anything to tell them to do that.

So, don’t make them sit through any kind of guidance.

Since ARKit is so much faster, in a lot of cases, they’ll start moving, ARKit will understand the world, and they’ll be ready to go right away.

Alright. So, that’s getting into AR.

That’s how you help guide people into your AR experiences.

Now, let’s talk about how to present your content in AR.

The different ways that you can show things in AR, and tips and tricks for all kinds of AR apps and AR games.

And, I want to start with some of the more radical possibilities of things you can build in ARKit.

Because not every AR experience has to look like looking through your device, seeing a camera preview of the world, and then placing objects out into space.

There are other options as well.

So, one of those options is to build an AR experience entirely in two dimensions.

You don’t need to show a camera preview.

You don’t need any kind of 3D graphics.

You’re using that information that ARKit gets about the world to present a great AR experience entirely in 2D.

So, let’s take a look at an example of that.

This is the game “Rainbrow.”

And, in Rainbrow, the way you control your character is by moving your eyebrows up and down.

[ Game Music ]

[ Applause ]

So, there’s no need for any kind of camera preview here.

There’s no need for any 3D graphics.

It’s a lot more fun entirely in 2D, but it’s still a great AR experience.

So, that was AR in 2D.

But, another way, another thing you can do with ARKit is you can build things that I would consider to be full virtual reality experiences.

And, what I mean by that, is that the experiences take you to a different place.

They really make you feel like you’re somewhere else.

You can walk around the environment.

You can move around within it.

You can look in all directions.

And, to me that makes something really into a virtual reality experience, even if it is through a device.

And, there’s some benefits to that.

You don’t need any extra equipment.

It works everywhere.

You don’t need to wear a headset on your head.

It works everywhere you are with the device you already have.

You don’t need any kind of trackers or anything else.

And, there’s some benefits to looking through a device rather than being fully immersed.

Like, you’re never going to accidentally walk into a wall because you always have your peripheral vision.

So, ARKit can be a great way to make virtual reality experiences.

We’ll take a look at one.

So, this is the virtual reality experience called “Enter the Room.”

And, it was developed by the International Committee of the Red Cross.

And, in this experience, once you walk into the room, you can look around in all directions.

You can move closer to things.

You can inspect them.

You can move farther away.

And, the sound comes in, and it feels like it’s coming in from outside the room, from all around you.

And, that makes it into a really powerful experience.

And, that’s what you can do with virtual reality using ARKit.

So, those are some of the more radical options of things that you can build with ARKit.

We can also look at some tips and tricks that make sense no matter what kind of app it is that you’re building.

Whether it’s a game, or a productivity app, these can work for everything.

So, the first thing I want to talk about here is showing text in AR.

Because text is really important.

All kinds of AR apps have different reasons to show text.

If it’s a game, maybe you want to show the title of a level, or some instructions.

Or, maybe for other kinds of apps, you want to label something in the world, label a virtual object.

Maybe show some annotations.

But, the important part, no matter what reason it is that you’re showing this text, is to keep that text readable.

Make it really easy to read.

So, the simplest way to show text in AR, is to put it out into the world, to put it in perspective.

And, you know, that can look really cool.

But, it also has some drawbacks.

And, one of those drawbacks is what happens when you look at it from an angle.

The letters can, sort of, squish together.

It can be a little hard to read.

And, another issue is what happens when you take a step back, when you’re looking from further away.

The text can get really small.

It’s like trying to read a piece of paper from all the way across the room.

So, if you’re showing titles, or other kinds of things that maybe people already know, or could get in another way, it can be really cool to show that in perspective.

But, if you’re showing something that people really need to read, need to get information from, you might want to try a different option.

So, a different option that you can use, is to show your text in screen space.

And, what I mean by screen space, is that the text is always the same size.

It’s always facing you.

It’s always head-on.

And, that makes it really easy to read.

It doesn’t have those problems of what angle you’re looking at, or how far away from it you are.

But, the important part of showing text in screen space, is that it’s still attached to a position in the world.

It’s still fixed to a place in the world, whether you attach it to an object, or to some physical feature.

And, that makes it really feel like part of the AR scene.

So, screen space text is a great way to label things, put some annotations into AR, but still make them really readable.

So, here’s an example of screen space text.

And, this is from Measure, which is part of iOS 12.

When Measure shows the measurements, it shows them in screen space.

So, no matter what angle you’re looking at those measurements from, or how far away you are, they’re always incredibly readable.

So, screen space text is great for readability.
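If you’re curious what anchoring a screen-space label to a world position looks like underneath, here is a toy sketch of the projection math. In a real app you would use ARKit’s `ARCamera.projectPoint(_:orientation:viewportSize:)` or SceneKit’s `projectPoint(_:)` rather than this hypothetical pinhole model; `Vec3` is a made-up minimal vector type.

```swift
// Hypothetical minimal vector type for the sketch.
struct Vec3 { var x, y, z: Double }

// Toy pinhole projection: camera at the origin, looking down -z.
// Returns the screen point to pin the label to, or nil when the world
// point is behind the camera (hide the label in that case). The label
// itself is then drawn at this 2D point at a fixed point size, which is
// what keeps it readable at any distance and from any angle.
func project(_ p: Vec3, focalLength f: Double, viewport: (w: Double, h: Double)) -> (x: Double, y: Double)? {
    guard p.z < 0 else { return nil }            // behind (or at) the camera plane
    let sx = (f * p.x / -p.z) + viewport.w / 2
    let sy = (f * -p.y / -p.z) + viewport.h / 2  // flip y: screen y grows downward
    return (sx, sy)
}
```

Only the label’s position updates every frame; its size never does, which is the whole point of screen space.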

But, you should still try and keep the amount of text you’re actually showing in AR as small as possible.

And, the reason for that is when text is placed in the world, you always have to keep pointing your device at it in order to read it.

If you turn your device to a more natural reading position, the text is going to go away.

So, if you have more detailed text, if you’re showing details about some kind of object, or something in the world, you should show those details on the display.

And then, people can use all that experience that they built up using iOS and reading things on their devices, to read those details directly on the display.

And, when you’re showing those details, when you’re coming out of AR, it’s important to have a transition.

Because those transitions can make it really clear what it is you’re looking at.

What text, what object it is that those details are referencing.

What it is that you’re getting details about.

So, let’s look at an example there.

And, this is also from Measure.

In Measure, when you tap on a measurement, it comes out of AR to show the details about that measurement flat on the screen.

And that’s great because it’s really easy to read.

You don’t have to point your device or your phone at the measurement in order to read it.

But, it’s also really clear, because of the transitions.

The transitions show you what measurement it is that you’re looking at details for.

And, they come out of the measurement, so you’re never going to be confused.

So, transitions.

They’re really great when you’re coming out of AR, and going back into AR, for showing details on the display.

But, they’re also really important when you’re just showing objects in AR.

And, that’s because it makes it feel like there’s one version of the object.

It makes it feel like that object has its own identity.

And, that’s important, because things in AR are so physical.

They have that physical sense.

When you see them in AR, they look real.

And, things in reality, you can’t just copy them.

You can’t just make multiple copies.

So, it’s important to keep that same principle when you’re showing objects in AR.

So, this is what happens when you use Quick Look on objects in AR.

And, it shows a great example of keeping that identity.

When you switch from the object tab to the AR tab, the object stays in place.

It always stays visible.

It doesn’t disappear and then come from somewhere else.

And, even when you’re deciding where to place the object here, it always stays visible on the display.

So, it’s really easy to see that there’s one version of this object.

And, even when you go back into the app that you were using Quick Look from, it still shows the object transitioning back to where it came from.

So, it feels like there’s one object moving between different parts of your app.

And then, into the world, and back out of the world.

It doesn’t feel like there’s multiple copies.

Alright. So, that was a bunch of information in a row about the different ways that you can present your content in AR.

So, let’s do a quick recap of that.

So, first, we looked at the different ways you can make AR experiences, and we talked about using AR to create entirely 2D experiences, with no camera preview, and no 3D graphics.

We talked about how VR can be a great way to build experiences with ARKit, and make you feel like you’re somewhere else, that can be really powerful and really immersive.

We talked about using screen space to show text in AR, to make it really readable from any angle, and readable from any distance.

We talked about showing your details on the screen, so that they’re easy to read without pointing your device at some specific place in the world.

And that you can use all of that same knowledge that you’ve built up reading things on iOS.

And finally, we talked about transitioning into AR, and out of AR, for showing details flat on the display.

But, also to give objects that sense of identity, that sense of physicality that’s so important in AR.

Alright. So, that’s presenting content in AR.

So, different ways that you can present it.

And some tips for different types of content that you’re going to want to show in your AR apps.

Now, let’s talk about how we can interact with that content.

Let’s talk about interactions that make sense in the world.

And, let’s start with touch, because touch has been really important all the way back to the beginning of iOS.

Multi-touch was there right from the start, and it’s been the most important way to interact with our devices.

And the reason touch is so important, the reason touch is so great, is that it enables direct manipulation.

And, direct manipulation is when you interact directly with things on the screen, like they’re physical objects.

You’re not using controls to scroll or to pinch to zoom.

You’re interacting directly with the content.

It’s like there’s a physical thing that you’re manipulating.

And, that’s even more important in AR.

Because in AR, as I said before, things are really physical.

Objects feel like they’re real.

They feel like part of the real world.

So, it’s really important to use direct manipulation to make it feel like you’re interacting directly with those objects.

And, direct manipulation is also great because it uses gestures that you already know, that you have experience with from iOS.

Because those gestures will be the same as any other content on iOS.

They’re things that you’ve been using for probably a long time.

So, the first one there is how you can move objects in AR with direct manipulation.

If you want to move objects, you just put your finger down and drag them to a new place.

And, it feels like you’re picking up the object, because it stays under your finger.

You get this physical connection to the thing that you’re moving.
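Under the hood, keeping the object under your finger usually means turning the touch into a ray and intersecting it with the surface the object sits on. In ARKit you would typically use a hit test or raycast against detected planes; this sketch just shows the underlying ray-plane math, with a hypothetical minimal `V3` vector type.

```swift
// Hypothetical minimal vector type for the sketch.
struct V3 { var x, y, z: Double }

// Intersects a ray (origin + t * direction) with the horizontal plane
// y == planeY. Returns the new world position for the dragged object,
// or nil when the ray is parallel to the plane or points away from it.
func dragPosition(rayOrigin o: V3, rayDirection d: V3, planeY: Double) -> V3? {
    guard d.y != 0 else { return nil }       // parallel to the plane
    let t = (planeY - o.y) / d.y
    guard t > 0 else { return nil }          // plane is behind the ray
    return V3(x: o.x + t * d.x, y: planeY, z: o.z + t * d.z)
}
```

Recomputing this every frame as the finger moves is what keeps the object glued to the touch.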

Another gesture you can do, is scaling objects.

So, in AR, things start out at their physical size, at their natural size.

But, if you want to change that, you can pinch out on the object to make it bigger.

And, you can pinch in on the object if you want to make it smaller.

The first important thing to think about when you’re scaling objects in AR is to give some feedback when you’re doing it.

Because the change is really, really big when you take an object and scale it up to 4 times the size.

So, it’s important to give people feedback, so they absolutely know what happened.

And, the second thing is to make it really easy to take the object back to its natural size.

Back to the size that it would be in the physical world.

So, you can snap back to 100%, maybe with a haptic, to make that really easy.
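As a rough sketch of that snapping behavior (the tolerance band here is an assumption, not a documented value), you might write something like:

```swift
// Snaps a raw pinch scale back to natural size (1.0 == 100%) when it
// lands inside an assumed tolerance band. didSnap is the cue to play
// a haptic, e.g. with UIImpactFeedbackGenerator in a real app.
func snappedScale(_ raw: Double, tolerance: Double = 0.04) -> (scale: Double, didSnap: Bool) {
    if abs(raw - 1.0) <= tolerance {
        return (1.0, true)
    }
    return (raw, false)
}
```

Firing the haptic only on the transition into the snapped state, rather than on every frame inside the band, keeps the feedback crisp.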

Another thing you can do is rotate objects, by putting two fingers on the display, and twisting to rotate.

And, that can be really great, but with all of these two finger gestures, another thing you should think about is your tap targets.

Because things in AR are always moving as you’re moving the device, and they can get really small if you get further away, or if you scale them down.

You should make sure to use really generous tap targets.

So, it’s easy to land two fingers on the object.

And, be sure to use the center of those two finger touch positions to figure out which object to interact with.

Because you might not be able to land two whole fingers, even on a generous tap target in AR.
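Here’s a sketch of that midpoint-based picking with a generous hit radius. The `Point` type and the 60-point radius are hypothetical; in a real app the object centers would come from projecting your objects into screen space.

```swift
// Hypothetical screen-space point type for the sketch.
struct Point { var x, y: Double }

// Midpoint of the two touch positions.
func midpoint(_ a: Point, _ b: Point) -> Point {
    Point(x: (a.x + b.x) / 2, y: (a.y + b.y) / 2)
}

// Returns the index of the nearest object whose generous hit radius
// contains the two-finger midpoint, or nil when nothing is close enough.
func pickObject(touches: (Point, Point), centers: [Point], hitRadius: Double = 60) -> Int? {
    let m = midpoint(touches.0, touches.1)
    var best: (index: Int, distance: Double)?
    for (i, c) in centers.enumerated() {
        let d = ((c.x - m.x) * (c.x - m.x) + (c.y - m.y) * (c.y - m.y)).squareRoot()
        if d <= hitRadius && d < (best?.distance ?? .infinity) {
            best = (i, d)
        }
    }
    return best?.index
}
```

Using the midpoint means the gesture still resolves to the right object even when neither finger lands squarely on it.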

Alright, so direct manipulation is really great in AR.

It’s really important because AR is so physical.

But, it’s also not quite enough for most AR apps.

Because if you have a lot of objects, it can make it hard to touch the right one.

As I said before, the objects are always moving on the display as you’re looking at different places in the world.

And, they’re staying fixed to that place in the world.

So, it can be a little hard to aim for those objects on the screen.

But, the fundamental reason, the number one reason, the real reason that touch is not enough for AR apps, is that it’s fundamentally two-dimensional.

The surface of the display is two-dimensional, and you’re touching on that surface.

That’s what made multi-touch so great for flat, 2D apps on iOS.

But, AR content, it’s placed into the world.

It’s part of the world.

So, that means we need some way to interact with that content also in three dimensions.

Because the world is three-dimensional.

The answer there is moving your device.

Because moving your device, it’s natively three-dimensional.

It’s three-dimensional inherently.

You can move up and down.

You can move left and right.

You can move forward and back.

You can turn in all directions.

You can stand up and move across the room if you want to go a little bit further.

But, the important part is that moving your device is fully three-dimensional, and that really makes it the number one, the primary interaction in AR.

In fact, I would say that moving your device is more important than touch for AR apps.

In fact, it’s so natural, it’s built in to every AR app by default.

The way that you look at different things in AR is by moving your device to look at it from different angles, and different positions.

So, it’s really natural, and it’s also really powerful.

And, moving your device, it accomplishes many of the things that you would have done in a 2D app by using multi-touch.

So, in a 2D app, if you want to see more content, the way you do that is by scrolling on the display.

You scroll down to see something new.

And, that’s great in a 2D app, but in AR, you do that in 3D.

You see different content, by moving your device to look at the content from different positions, and from different angles.

So, it solves that same problem of wanting to see more, but it solves it fully in three dimensions.

Similarly, in a traditional 2D app, if you want to see something bigger, you pinch out on it to make it bigger.

You pinch to zoom.

If you want to see it smaller, you pinch in.

You pinch to zoom out.

But, in AR, if you want to see something bigger, you can just get closer to the thing you’re looking at.

You can just move closer in.

And, if you want to see more things at once, you want to see things from a wider angle, you can just take a step back, and look at all of the content at once.

So, moving your device, it also replaces what you might have done by pinch to zoom to see more content in a 2D application.

So, movement is really great.

It can replace some of those things you might have used multi-touch for.

You can also use it to build totally custom interactions for your AR apps.

And, those can be really natural.

Just like using your device to look around at different things, or to get closer and further away.

So, let’s take a look at an example of that.

This is SwiftShot, the multiplayer AR sample game that you might have seen in the Keynote, or you might have actually tried it.

To fire a slingshot in SwiftShot, you move close to that slingshot.

You don’t have to pick which slingshot you’re going to fire from a list.

You don’t have to reach out and try and aim for it on the screen.

You just move close to it, and when you want to fire the slingshot, you just pull back and release.

So, it’s incredibly precise, because you can fire in three dimensions.

You have three dimensions of precision while you’re moving that slingshot, while you’re pulling it back.

And, that’s more than you could ever do using touch.

So, moving your device, it’s not only really natural, but it’s also more precise than you could ever do using touch in AR apps.

So, moving your device is great, but sometimes we need to combine the best of touch and the best of device movement to create the absolute best interactions.

So, let’s look at some examples of that.

And, let’s start with combining direct manipulation with moving your device.

So, Quick Look in AR also offers a great example of combining device movement and direct manipulation in AR.

If you want to move an object, of course, you can just drag it to the new position, the same as you saw before.

Touch on the screen and move it to a new place.

But, you could also touch down to pick up the object and then turn the device and release in a new place.

And, that’s great, because it gives you the full three-dimensional control of moving your device to pick where you’re going to place it.

And, it lets you move objects to places that you can’t see on the screen, by picking them up and turning.

But, it still keeps that sense of physical interaction.

That sense of direct manipulation that you had from picking up the object directly.

So, if you support any kind of moving objects in AR in your apps, you should definitely support picking up an object with direct manipulation, and moving the device to find a new place to put it.

Alright. So, moving your device and direct manipulation is one way to combine touch.

But another way to combine touch with moving your device is through indirect controls.

And, what I mean by indirect controls, are controls that are flat on the display.

They’re not placed in the world.

They’re not attached to anything.

They’re in a consistent position on the display, and you can learn that position.

And, that’s really great, because that means they get out of the way.

Once you’ve learned that position of that control on the display, it stays in one place.

You can rest your finger above it while you’re spending your time focusing on the rest of the app.

So, let’s look at an example of that.

Here’s Zombie Gunship AR.

You’re flying in a gunship above a horde of zombies, and you really need to aim at them.

You want to focus on aiming.

You don’t want to focus on moving your finger, trying to figure out where you need to aim your finger on the display in order to fire.

Instead, you can just rest your finger above the fire button, and spend all of your time using that full 3D precision of moving your device in order to aim where it is you’re firing, to aim at those zombies.

So, indirect controls are really great combined with moving your device.

But, they’re also really great because they let apps work one-handed.

In AR, no matter what, you always have to use at least one hand to control where you’re looking.

So, if you want to build a one-handed AR experience, you’re going to need to use really reachable controls that are really easy to access.

So, an example of that is Measure.

Measure uses an indirect control, the plus button, the add button at the bottom of the screen to add points.

And, that control is in a really reachable spot.

So, even while you’re using one hand to focus the reticle on the center of the screen, to precisely place your measurements, you can keep a finger placed above that plus button to easily add your measurements, to easily place those points.

So, using indirect controls can be a great way to make it not only easy to use AR experiences, but one-handed ones as well.

Alright. So, that’s how to pick interactions for AR.

You can use direct manipulation to give that sense of physicality, and physical interaction.

You can move the device, the primary interaction in AR.

And, you can use really reachable indirect controls to let people focus on the content, rather than on the controls or buttons for interacting with it.

And, that’s what I wanted to talk to you about today.

Getting into AR, guiding people down the right path, and giving them that direct feedback that they’re doing the right thing.

The ways you can present your content in 2D and VR, and how you can show content to make it easy to read, and give objects that physical identity transitioning in and out of AR.

And, the interactions that you can use in the world, especially and primarily moving your device.

So, now I want to bring up Omar, who’s going to talk about making your models, your 3D models look really great in AR.

Thank you.

[ Applause ]

Thanks, Grant.

Hey, everyone.

I’m really excited to be up here to talk about some best practices that you need to keep in mind while you’re developing your content for your AR experiences.

So, we have a lot of different pieces of information to cover today.

And, whether you’re an engineer, designer, manager, or an artist, we wanted to provide you with a utility belt of tactics and definitions, so that you are best equipped to craft your own exceptional AR content to delight people with.

So, to begin with, let’s start with some essential concepts to keep in mind while you’re developing your AR experience.

AR is incredible.

Having the ability to take anything you can imagine, and place it into the real world is absolutely magical.

And, because of this, people tend to expect a lot out of their AR experiences.

They expect your 3D content to render at a smooth and consistent rate.

It’s incredibly distracting when you’re really engaged with the content, and you start to move a bit closer to it, and then, you want to appreciate those fine details, and all of a sudden, wham!

Poor optimization has caused performance to tank, and now it’s as if you’re watching a slideshow.

So, to ensure the smoothest performance at all times, and to keep people fully engaged with your AR scene, your app needs to render at a recommended target of 60 frames per second.

Now, it’s really important to maintain this target throughout the entire experience.

Really stress test your content.

View it from every possible angle.

You know, move in close, move back from it, and just make sure that the performance does not ever degrade.
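One rough way to watch for that while stress testing is a rolling average of frame durations checked against the 60 fps budget. This is a hypothetical helper, fed from a display link or renderer callback in a real app:

```swift
// Rolling average of frame durations checked against the 60 fps budget
// (about 16.7 ms per frame). Hypothetical helper: feed record(frameDuration:)
// from a CADisplayLink or your renderer's per-frame callback.
struct FrameBudgetMonitor {
    private var samples: [Double] = []
    private let window: Int

    init(window: Int = 60) { self.window = window }

    mutating func record(frameDuration: Double) {
        samples.append(frameDuration)
        if samples.count > window { samples.removeFirst() }
    }

    // True while the rolling average fits within 1/60 of a second per frame.
    var isHittingTarget: Bool {
        guard !samples.isEmpty else { return true }
        let average = samples.reduce(0, +) / Double(samples.count)
        return average <= 1.0 / 60.0
    }
}
```

Averaging over a window instead of checking single frames keeps one slow frame from looking like a sustained drop.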

You know, maybe someday in the future, batteries will run for days.

But, today, make sure that your experience is able to have as minimal of an impact to battery life as possible.

Don’t give people the chance to blame your app for draining their battery.

The more power you save, the more people will want to come back to your experience and try it again.

I mean, I don’t know about you, but whenever I see a battery indicator in this state, I seriously feel like I’m going to have a panic attack.

And, we definitely don’t want to have our AR experiences cause widespread battery chaos throughout the lands.

Remember, only you can prevent excessive battery drain.

I like to look at AR as having the power to take anything you can imagine and transport it into the real world.

People want to explore your content, so you definitely want to bring your A game.

Take the time to craft those nuanced details into your 3D content.

Build a cohesive story and style.

And, remember that every little detail, every little touch is an opportunity to surprise people.

So, let’s say we wanted to make an AR experience to be about an aquarium.

Even in its most abstract form, I think I would be really hard-pressed to find anybody who would believe that this marshmallow blob represents a fish.

On a positive note, though, our app will pretty much rock the performance numbers if this little guy’s flopping around.

So, let’s try that again.

Ah, now this is much better.

That is one properly dead-looking fish.

See how it exhibits some nice details that, once it’s running in AR, will really entice people to move closer and explore all of its nuanced features.

We should strive to maintain this level of quality with all the fishes swimming in our aquarium, or just floating at the top, in this case.

Finally, it’s important to remember that people really want to use your app in a wide variety of environments.

You want to avoid having your content stand out in real-world locations where the ambient lighting conditions could conflict with the story that you’re trying to tell.

So, when working with your assets, try to avoid using colors that are either too bright or too dark.

And, make sure that you light your AR scene in a way that casts even lighting on all the objects you’re planning to render, no matter what angle you view them from.

You want your AR content to work whether it’s day or night.

Now, spoiler alert, we’re going to go over some really nice features in ARKit that will enable you to really delight people when they see your AR content blend and react seamlessly to the real world environment.

So, as you’re building your AR content, one great tool to help evaluate your progress is one of our recently announced iOS 12 features, AR Quick Look.

Throw some of your assets up on iCloud Drive, view them using the Files app on iOS, and quickly project them into AR.

Heck, you can even show off your masterpiece by throwing it onto a website, so that your friends can view it anywhere, right from Safari.

It’s pretty awesome.

Definitely go back and check out the session with Dave and David, who go over best practices for using AR Quick Look, and I guarantee you, it’ll probably change your life with how you develop your assets.

So, now that you’ve taken some time to consider what people expect from an AR experience, let’s quickly plan out what type of app we’re going to build today.

You know, it’s always a good idea to start thinking this through before you begin creating your 3D content.

Knowing what you want to make will help narrow down how best to optimize your content and your assets for AR.

So, you’re sitting at your desk, and suddenly you’re struck by inspiration.

You just came up with the most brilliant AR experience ever.

Alright? So, let’s step back and ask ourselves a couple questions first.

Does this experience really need to render hundreds of AR objects, or will we focus on a single hero asset?

How much detail do we actually want?

And, what graphical style best represents what we’re trying to convey?

Did we really pay attention to Grant earlier, and are we thinking about the level of interaction we want people to have with our experience?

Having a clear answer to questions like these will help determine where best to put your rendering budget when you’re developing your app.

For example, imagine you’re building an AR experience similar to the IKEA Place app, where people can preview different pieces of furniture by being able to place them in their home, or in our case, outside on their patio.

Now, the stars of the show are actually the different furniture pieces, so you need to present highly detailed objects that closely mirror their real-world counterparts.

In this case, it is a good idea to make sure that you spend a little extra time in your rendering budget on these single hero assets, because their level of quality can essentially make or break a sale.

On the other hand, say you decide that you’ve had enough of accidentally stepping on those little, tiny, plastic bricks of agonizing pain that your kids leave lying around the house.

So, in order to preserve your sanity, you build an AR experience so they can play around with as many of these blocks as they can possibly imagine, similar to Playgrounds AR.

And, save yourself from ever having to experience brick pain again.

In an app like this, where you potentially have a lot of objects being rendered and interacted with, you want to basically create very simple, low-poly models with a flat colorful material, so that you can have a ton of them onscreen, and still have really good performance.

So, now that we’ve asked ourselves these important questions, it’s time to set up our AR canvas.

Just like painters like to set up their canvas before they begin work, we have a couple of suggestions for setting up your project up front, in order to position yourself for success.

We are big fans of creating a focus square to determine where to start placing your AR content.

And, if you’re using SceneKit, right there on the bottom of the screen, you have the option to actually activate the statistics panel.

This will allow you to see your current frames per second, as well as how many polygons are visible on the screen at any given time.

Which should be very helpful as you start to build out your app and put all the different elements into it.
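As a minimal sketch, assuming a standard ARSCNView-based project, turning on that statistics panel is a one-liner:

```swift
import ARKit

// Assuming a typical ARSCNView-based setup: the statistics panel shows
// frames per second, draw calls, and polygon counts at the bottom of
// the screen while your scene runs.
let sceneView = ARSCNView(frame: .zero)
sceneView.showsStatistics = true
```

It’s worth leaving this on for the whole development cycle, so performance regressions show up the moment you introduce them.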

So, now that we have a starter scene up and running, I was thinking, what would be a good example AR app to help go over these best practices?

So, I’m not really an outdoorsy guy, but coming to California, I find that here a lot of people are.

So, I’ve been trying to connect with nature.

You know, go camping, maybe make a campfire for once, and yet it really didn’t happen.

So, instead I thought, let’s just throw it into an app, and see how it goes.

And we’re going to call it CampfiAR.

I know, it’s perfect, right?

Now, we can work on building out a detailed, single object, and bring all the joys of being outdoors without the fear of any of those bugs or fresh air.

We decided to render with a stylized, semi-realistic, and playful graphical style.

And, to apply unique details through the careful application of some key, physically-based material properties.

These choices mean that we could potentially use a lot of polygons to render the content on the screen, but why deprive people of the ability to spend multiple hours staring at our beautiful campfire?

Let’s avoid going down that route, and work towards optimizing our scene by using a few tricks of the trade.

So, we’ll begin by focusing on the foundational structure of 3D objects, the mesh.

And describe the typical development flow that will allow you to create highly detailed models, but still maintain a low poly count for all the models in your scene.

And, for those of you who might not know, poly count is essentially the number of polygons, typically triangles, that a mesh is composed of.

So, one of the first things we like to do, is lay out the basic structure of an AR scene by using these simple meshes.

We find using this type of white-boxing technique to be really helpful for testing out some basic interactions, as well as seeing how well the objects fit into the real world, and what kind of scale they’re at.

You know, actually, I think this campfire looks really great.

I think we’re going to call it a day here.

Let’s just call this done and ship it.

Thanks everyone.

I’m off to the afterparty, and wait.

So, that didn’t look like a campfire to you guys?

Oh, alright.

Sorry. Really?

My bad. Let’s jump back to actually building out this campfire mesh.

I want to give a quick high-level review of what a mesh is, and the basic data structures that comprise it.

So, you can think of a mesh as a collection of triangles arranged in 3D space that form a surface for you to apply materials to.

And, the corners of these triangles are actually made up of points called vertices, which hold different pieces of information, such as its position in space, UV coordinates for texture application, as well as a very important property that we’ll go over later called the normals.
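As a rough illustration (the names here are ours, not SceneKit’s actual internal layout), you can picture the data each vertex carries like this:

```swift
// Illustrative only: a sketch of the per-vertex data a mesh stores.
struct Vertex {
    var position: (x: Float, y: Float, z: Float)  // where it sits in 3D space
    var uv: (u: Float, v: Float)                  // texture (UV) coordinates
    var normal: (x: Float, y: Float, z: Float)    // surface direction, used for lighting
}

// Three vertices form a single triangle of the mesh.
let triangle = [
    Vertex(position: (0, 0, 0), uv: (0, 0), normal: (0, 0, 1)),
    Vertex(position: (1, 0, 0), uv: (1, 0), normal: (0, 0, 1)),
    Vertex(position: (0, 1, 0), uv: (0, 1), normal: (0, 0, 1)),
]
```

The poly count mentioned earlier is simply how many of these triangles the mesh contains.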

Well, since I got busted for trying to ship early, I wanted to redeem myself by building out one of the most gorgeous campfires in the world.

Take a look at the details of this campsite.

Look at that fish and those branches.

You can see all the intricate details of the scales, as well as the etchings found on the bark.

But, man, my performance has tanked.

And, that poly count has gone up to almost a million polygons for this scene.

So, I’m already in hot water, and I don’t want to get in any more trouble, so we better go back and fix this.

I’m concerned about the impact that this will have on battery life, as well as on how people are able to perceive and interact with this AR scene.

So, let’s see what we can do to, kind of, help reduce the number of polygons here.

Most 3D authoring tools have specific functions that make it easy for you to reduce the actual complexity of your models.

Here, we’ve reduced the number of polygons associated with the high-density model of the fish.

But, notice that, as we zoom in, many of the details have been lost.

But, don’t fret.

We can use certain material properties later on that will bring back a lot of those missing details.

The key here is to build a solid foundational mesh that uses a minimal amount of polygons.

So, let’s put aside the high-density mesh for now.

And, let’s focus on building up this low-density mesh as we move forward.

Alright, so I’ll admit that this isn’t looking quite as good as before, but man, take a look at that performance.

Not only have we saved a crazy amount of overhead by reducing the number of polygons on the screen, but we’re able to add a bunch of 3D objects to this scene to make it even more robust.

And, if you recall, in our previous high-density mesh, we were running at about 30 frames per second.

But, now we’re back to 60 frames.

And, we were close to about a million polygons, and now it’s down to 9,000.

This is incredible.

Because of this, we are well on our way to having those performance specifications that we desire.

A really solid frame rate with minimal impact to the battery life.

So, now that we have this optimized model in our campfire scene, let’s see how we can bring back some of those details that we lost, by working with some different material properties and techniques that will ensure our model looks as good as possible while still maintaining that great level of performance.

So, you may have heard this term before, physically-based rendering, thrown about regarding modern 3D rendering.

It’s a pretty complex topic that will take a lot more time than we actually have in this session to go over.

But, the basic concept describes applying your different material properties to your mesh in order to have it react realistically to the simulated lights found in your AR scene.

And, moving forward, all the materials that we’ll be discussing will conform to this shading technique.

If you want further details about this concept, there’s a great WWDC 2016 talk that goes over details about physically-based rendering as it applies to SceneKit, called Advances in SceneKit Rendering.

Now, with that said, let’s start talking about our first material property, albedo, or what is sometimes lovingly referred to as the base or diffuse color property.

So, let’s jump back into CampfiAR.

Previously, our base mesh was looking a little bit boring, with the gray material associated with it, but after you apply the albedo, it starts to look a lot better.

But, the campfire is still missing a lot of those small, fine details that were originally found in that high-density mesh.

As you move closer to the campfire, you’ll notice that all the surfaces are relatively flat.

And, this is something that we’ll definitely correct later, but first let’s dive into the albedo property a little bit more.

Think of the albedo as basically being the base color of the various objects in your AR scene.

This is the material property that you typically use to apply textures to the surface of your model.

If you recall, your mesh contained different vertices, that held different pieces of information.

The ones you see here are actually called UV coordinates, which help map out how pixels from various texture maps are actually applied to the model.

And, after we’ve worked with these textures, we’ve applied them to the albedo material property on the fish.
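In SceneKit terms, applying an albedo texture is a matter of setting the material’s diffuse contents; here’s a minimal sketch, with a hypothetical texture name:

```swift
import SceneKit
import UIKit

// "fish_albedo" is a hypothetical asset name for illustration.
let fishMaterial = SCNMaterial()
fishMaterial.lightingModel = .physicallyBased              // opt into PBR shading
fishMaterial.diffuse.contents = UIImage(named: "fish_albedo")  // the albedo / base color map
```

The UV coordinates stored on each vertex determine how this texture wraps around the mesh.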

So, now that we’ve essentially applied a texture to our fish, I want to remind you that you never know where people are going to experience your app.

You want to be able to have your content fit in as many different scenarios as possible.

So, you want to take care in selecting albedo values that are neither too bright nor too dark, as you want this to work in a wide variety of different situations.

So, our fish has got a skin, but we’re still missing a lot of details from this, and the other objects found in the scene.

So let’s get into how we can start bringing back a lot of those details through the use of the normal material property.

So, as we jump back into CampfiAR, let’s see how we can bring back some of those details that we removed during optimization.

This can be done through the use of a special texture called a normal map, which you can see here as the blue tinted maps that are now applied to our AR scene.

These maps allow you to add those fine surface details back into your models, without the need to add any additional geometry.

Now, after applying the normal maps, you can see how the fish exhibits some scales, as well as making the branches show just a tiny bit more detail.

Also if you take a look at the statistics panel, you’ll notice that there was absolutely no change in the number of polygons related to this model.

It’s magical, isn’t it?
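In SceneKit, the normal map plugs into the material right alongside the albedo; a sketch, again with hypothetical texture names:

```swift
import SceneKit
import UIKit

// Hypothetical texture names for illustration: the normal map adds
// surface detail purely through lighting, with no added geometry.
let fishMaterial = SCNMaterial()
fishMaterial.lightingModel = .physicallyBased
fishMaterial.diffuse.contents = UIImage(named: "fish_albedo")
fishMaterial.normal.contents = UIImage(named: "fish_normal")
```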

So, how do we make this normal map?

Let’s take a closer look at one of the branches, and see what we can do to make this work.

In most modern 3D modeling applications, artists have the ability to generate these normal maps by projecting details from a high-density mesh over to a low-density one.

So, here you can see what the normal map looks like on the branches after they’ve been generated from this high-density mesh.

And, after applying the normal map, you start to notice all those nice details that were originally lost come back into this model.

But we still are able to maintain that high performance, of the low-poly mesh.

So, you may be wondering why the normal map looks kind of strange. Well, the colors of the normal map are actually visual representations of vector data.

They determine how the normals on the surface of the model will be offset in order to change the way that light is being reflected, and they’re key to making this effect work.

That was a bit of a mouthful, so let’s dive a little bit more into this property, because normals we feel are a really important topic.

And, we want to spend a couple more minutes just going into how truly spectacular these can be.

The art of manipulating normal vectors is one of the key tools that AR creators have to add a lot of significant detail back into their models.

So, what the heck is a normal vector, and are there strange vectors as well?

Well, no, there are no strange vectors, unless you forgot your high school trig. But normal vectors lie perpendicular to the surface of the mesh, and are associated with each mesh vertex.

So, why do we need these normals?

Well, in order to see our object, you need to place simulated lights into your 3D engine.

Normal vectors allow 3D engines to calculate how these lights are actually reflected off the surface of these materials.

Similar to how light behaves in the real world, and are essential to making sure that your AR scenes mimic reality.
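To make that concrete, here’s a small sketch in plain Swift (illustrative only; your 3D tool or engine computes these for you) of where a face normal comes from: it’s the normalized cross product of two edges of the triangle.

```swift
// Minimal vector math, just enough to derive a face normal.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 {
        Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    func cross(_ o: Vec3) -> Vec3 {
        Vec3(x: y * o.z - z * o.y, y: z * o.x - x * o.z, z: x * o.y - y * o.x)
    }
    func normalized() -> Vec3 {
        let len = (x * x + y * y + z * z).squareRoot()
        return Vec3(x: x / len, y: y / len, z: z / len)
    }
}

// The face normal is perpendicular to the triangle's surface.
func faceNormal(_ a: Vec3, _ b: Vec3, _ c: Vec3) -> Vec3 {
    (b - a).cross(c - a).normalized()
}

// A triangle lying flat on the ground plane:
let n = faceNormal(Vec3(x: 0, y: 0, z: 0),
                   Vec3(x: 1, y: 0, z: 0),
                   Vec3(x: 0, y: 0, z: 1))
// With this winding, the normal points straight down: (0, -1, 0).
```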

What’s interesting is that by modifying these normals, you can trick the engine into thinking that your surface is actually more detailed than it really is, without needing to add any additional geometry.

If you take a look at this example, you can see a simple sphere, being rendered with flat shading.

What this means is that the normals associated with each face of the mesh are pointing in the exact same direction, as seen in the 2D diagram.

Now, when light reacts to this surface, you’ll actually be able to notice all the different polygons comprising this mesh, due to each face being evenly lit.

Here, though, we’re using the exact same model, but leveraging a technique called smooth, or Phong, shading.

Notice that the normals are actually gradually changing as you move across the surface of the polygon.

When the engine calculates the reflection off of this model, it’ll give the impression of a smooth, curved surface due to the gradual interpolation of these normals.

What’s really awesome is that these two models have the exact same number of polygons associated with them.

But, through this normal manipulation, the object will seem to have a much smoother and more detailed surface, without the need, again, to add or change any of the geometry associated with this mesh.
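Here’s a sketch, in plain Swift, of the interpolation that smooth shading performs: the engine blends the vertex normals across the face and renormalizes, so the lighting direction changes gradually. (Real engines do this per pixel in a shader; this is illustrative only.)

```swift
// Linearly interpolate between two unit normals, then renormalize so
// the result is still a valid direction vector.
func lerpNormal(_ a: (x: Double, y: Double, z: Double),
                _ b: (x: Double, y: Double, z: Double),
                t: Double) -> (x: Double, y: Double, z: Double) {
    let v = (x: a.x + (b.x - a.x) * t,
             y: a.y + (b.y - a.y) * t,
             z: a.z + (b.z - a.z) * t)
    let len = (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
    return (v.x / len, v.y / len, v.z / len)
}

// Halfway between an "up" normal and a "sideways" normal:
let mid = lerpNormal((0, 1, 0), (1, 0, 0), t: 0.5)
// mid points diagonally, which is what makes the flat face appear curved.
```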

Whew. Alright, so we’ve gone through enough about normals.

Let’s, kind of, go over what it takes to add a little bit more shine to your scene.

CampfiAR is definitely starting to look better.

But, some parts seem a bit dull, especially the objects that you expect to be shiny or reflective, like the kettle or the fish in this scene.

What we see here is the results of applying a metal map to our AR scene.

A metal map is used to determine which object surfaces should exhibit reflective properties.

Once the material property’s activated, notice how shiny and reflective the areas we’ve designated as metallic are, on the kettle as well as on the scales of the fish.

So, let’s focus specifically on the kettle.

We’ll begin by taking the original albedo map, and then apply a metal map to the metalness property of this material.

After the application of the metal map, the 3D renderer will designate the surface as reflective in the areas of the map that are white.

And, despite being called metalness, it doesn’t necessarily mean that your object needs to contain metal.

Rather, we’re just letting the 3D engine know that this object should exhibit, kind of, a reflective surface.

Now, it’s best to use the metalness map on a model, like our kettle here, when it has a mixture of metallic and non-metallic surfaces.

It’s a simple grayscale map, where the level of metalness ranges from being black, for non-metal surfaces, to white, for metallic surfaces.

And, it allows a single material to represent both reflective and non-reflective surfaces on a single object.

However, this kettle’s a bit crazy reflective.

And, doesn’t really quite provide the look that we’re hoping for.

So, in this case, we want to potentially vary the amount of reflectivity, as well as simulate the fact that all surfaces are not perfectly smooth.

And, they might actually exhibit slight microabrasions on their surface.

And, this is where the roughness material property comes into play.

So, returning back to CampfiAR, you can see how the reflective surfaces are a bit too smooth.

As we layer on the roughness maps, you can see that we are going to modify both the kettle and the fish to adjust the way that they’re reflecting.

And then, after we apply the roughness material property to both of these objects, you can clearly see the reduction of the reflectivity.

This combination of roughness and metalness properties is another important concept to focus on.

So, let’s dive a little bit deeper into the roughness material property.

Use roughness to simulate micro surface details, which in turn will affect the way light is being bounced off that surface.

If the roughness property is set to completely smooth, then light will bounce off of your surface like a mirror.

As you increase the roughness of the material, light reflects over a wider range of angles.

Here, we’re slowly scaling a constant roughness value from no roughness to max roughness on the kettle itself.

And, this is a good way of being able to simulate the concept of having microsurfaces, and blurring reflection to the point where you might not see any reflectivity depending on which range you put your value to.

So, for the kettle, we’ve taken the original metal surface, and instead of just applying a constant roughness value to it, we actually applied a roughness map.

And, this will help designate the surfaces where we will be scattering the light more than on others.

Once we’ve applied the roughness map, we get to see the real final look of the reflectiveness of this kettle, which is a lot less shiny than before.

Now, the combination of this metalness property and this roughness property really makes your reflective AR models look phenomenal.

Roughness can be used to tweak how much of your objects will reflect the environment.

And, can really be used to add a lot more realism to your metallic surfaces.

You can use this roughness map to add additional minor details as we’ve done here, to make our kettle look a little more scuffed up.
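Putting those two properties together in SceneKit might look like this sketch (the texture names are hypothetical): a grayscale metalness map marks which areas reflect, and the roughness map blurs those reflections where the surface is scuffed.

```swift
import SceneKit
import UIKit

// Hypothetical texture names for illustration.
let kettleMaterial = SCNMaterial()
kettleMaterial.lightingModel = .physicallyBased
kettleMaterial.metalness.contents = UIImage(named: "kettle_metalness")  // white = metal, black = not
kettleMaterial.roughness.contents = UIImage(named: "kettle_roughness")  // white = rough, black = mirror-smooth
// While experimenting, a constant value also works:
// kettleMaterial.roughness.contents = NSNumber(value: 0.4)
```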

Now, to close out materials, we have two more material properties that I want to discuss to further refine your models, and have a good balance between performance and aesthetics.

Ambient occlusion is a material property that is used to provide your model with self-shadowing, which in turn can lead to adding additional depth and details to your AR models.

Now, while normal maps are great for applying significant amount of details back into your AR model, you can use ambient occlusion to really hammer in those details.

Here, we’re visualizing the ambient occlusion maps for CampfiAR, and it’s a bit of a difficult effect to demonstrate, as it’s meant to be relatively subtle.

But, see if you can notice the additional shadows on the logs, as well as certain areas near the bottom of the kettle.

In our case here, it’s kind of like playing Where’s Waldo?

with shadows, so let’s focus in on the logs in the scene.

So, here we’re showing you the normal-mapped logs from before.

Now, there’s some great details found in the ridges, but we can definitely improve some of these areas.

Now, as we look at the ambient occlusion map, you can see how we’ve added some regions of self-shadowing.

On the lower portions of the log, as well as around the small embedded stumps.

And, after we apply the map to our ambient occlusion property, I hope you can see the benefits of adding those baked detail shadows without the need of using expensive dynamic lights in your scene.

When working in AR, we recommend that you actually bake your ambient occlusion into a map.

Which is what we’ve done for CampfiAR.

Rather than using an alternative such as screen-space ambient occlusion, which is a camera-based post-process effect, and can potentially lead to poor rendering performance in your scene.
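In SceneKit, a baked AO map plugs straight into the material (the texture name here is hypothetical):

```swift
import SceneKit
import UIKit

// Hypothetical texture name: a baked ambient occlusion map adds
// self-shadowing with no dynamic lights and no post-processing cost.
let logMaterial = SCNMaterial()
logMaterial.lightingModel = .physicallyBased
logMaterial.ambientOcclusion.contents = UIImage(named: "log_ao")
```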

And, last but certainly not least, be frugal with the use of transparency in your materials.

If you must use transparencies, we recommend that you use separate materials for objects where you see a combination of transparent and non-transparent surfaces.

In general, when working with AR content, the use of a lot of transparent surfaces can potentially have a huge impact on performance, especially if you are having transparent surfaces that are, kind of, stacked in front of each other when you’re viewing them.

This is known as overdraw, and it’s something that you definitely want to avoid when you’re working in AR.

Whew. Alright.

I hope everybody’s still with me, as that was quite a lot to go through.

So far, we’ve focused mostly on how the AR content will react to the simulated spotlights found in our 3D engines.

But now, it’s time to focus on some ways to make our content seem like it’s actually part of the real world.

A fantastic option to compensate for varied lighting conditions is to leverage one of ARKit’s well-known features, light estimation.

Let’s begin by activating this functionality and see how it affects our kettle.

Notice how, when the ambient light changes in intensity in the real world, a similar adjustment is made to the ambient light in our AR scene.

The way this works is that ARKit analyzes each frame of the video, and uses them to estimate the lighting condition of the real world.

It is an absolutely magical feature that helps ensure that the amount of light applied to your AR content matches what you see in the real world.
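A minimal sketch of how this might be wired up: light estimation is on by default for world tracking, and an ARSCNView can apply the estimate to the scene automatically, or you can read it yourself each frame.

```swift
import ARKit

// Enable light estimation (the default for world tracking) and let the
// view match the scene's ambient light to the real world automatically.
let configuration = ARWorldTrackingConfiguration()
configuration.isLightEstimationEnabled = true

let sceneView = ARSCNView(frame: .zero)
sceneView.automaticallyUpdatesLighting = true
sceneView.session.run(configuration)

// Or read the per-frame estimate yourself:
if let estimate = sceneView.session.currentFrame?.lightEstimate {
    print(estimate.ambientIntensity)   // roughly 1000 in a well-lit environment
}
```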

Now, that we have magical light wizards living in our AR scene, let’s discuss shadows.

Shadows in AR are really, really hard to get right.

Your shadow needs to be able to work in a wide variety of situations.

Remember that people can be potentially using your app anywhere.

And, if your AR shadows differ from the ones that are seen in the real world environment, it can be relatively jarring for your experience.

Here, we have incorrectly cast a dynamic shadow by using a directional light at a sharp angle in our 3D engine.

Shadows are a great way to make objects actually feel grounded in the real world, but in this case, it doesn’t really match the shadows being seen in the surrounding environment.

It’s like we’re purposely trying to defy the laws of physics here.

Instead, we suggest that you place your directional light directly overhead, and play around a little bit with the intensity of the effect to make sure that it feels a bit more subtle.

This will allow your shadows to work in a lot more situations and scenarios.

An alternative to this is a method where you create your own drop shadow, rather than using dynamic lights in your scene, which can get expensive and can severely affect performance if you’re rendering a lot of 3D content.

Take the time to craft those really good shadows.

And ensure that they will fit in as many real world situations as possible.
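One common way to build such a shadow in SceneKit is a "shadow catcher": an overhead directional light casts a soft shadow onto an invisible plane lying on the detected surface. This is a sketch of that approach, not the only way to do it.

```swift
import SceneKit
import UIKit

// An overhead directional light with subtle, deferred shadows.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .directional
lightNode.light?.castsShadow = true
lightNode.light?.shadowMode = .deferred
lightNode.light?.shadowColor = UIColor(white: 0, alpha: 0.5)  // keep it subtle
lightNode.eulerAngles.x = -.pi / 2   // point straight down

// The plane renders nothing of itself, only the shadows cast onto it.
let catcher = SCNPlane(width: 2, height: 2)
let catcherMaterial = SCNMaterial()
catcherMaterial.lightingModel = .constant
catcherMaterial.writesToDepthBuffer = true
catcherMaterial.colorBufferWriteMask = []   // invisible except for received shadows
catcher.materials = [catcherMaterial]
```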

Ah, environment maps.

If you really want to astound people, you definitely want to use these maps, especially on those AR objects that exhibit reflectivity.

It’ll make it seem like your AR content exists right here, in the real world.

And, to make it super easy to leverage their powers we want to show you how one of the new features in iOS 12 and ARKit 2.0, automatic environmental mapping, can help you achieve this effect.

If you take a close look at the kettle, originally we’re using a baked environment map that has a kind of blush tinge to it.

Once we’ve activated the automatic environmental mapping, notice how the kettle now reflects a lot of the ground, and the surrounding color of the current environment that we’re in.

You can even see a little bit of the green from the grass that this kettle’s sitting in.

This is a fantastic feature that’ll really help ground your objects into the real world, and with careful use of roughness, can really enhance the believability of your scene.

So, now, why is automatic environmental mapping actually so incredible?

Typically, these maps are used to simulate the ability of metallic surfaces to mirror the environment around them.

You can see an example of a cube environment map here.

Now, before ARKit added automatic environmental mapping, you had to provide your own image, and hope that it was generic enough to work in a large variety of situations that your app may be used in.

But now, with ARKit 2.0, you can kiss goodbye those sleepless nights spent fretting about whether your environment map is ruining your AR experience.
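Opting in is a single configuration flag; a sketch, assuming an ARKit 2 world-tracking session:

```swift
import ARKit

// New in ARKit 2: instead of shipping a baked cube map, let ARKit build
// environment textures from the camera feed as the session runs.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic
```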

For more information about environmental mapping, and other new features found in ARKit 2.0, please check out the talk by Arsalan and Reinhard called What’s New in ARKit 2.

And now, to close out CampfiAR, let’s put all this together, and add the final touches to the campfire.

With a little bit of animations and [inaudible] effects, along with the applications of all the techniques discussed today, CampfiAR is ready for primetime.

Now, if I ever get the crazy urge to go outside, I can suppress those urges, and stay safely at my desk, enjoying the joys of the simulated outdoors with CampfiAR.

Who needs EpiPens?

Not this guy.

We went through a lot of different topics today.

So, I quickly want to reiterate some of the important things to keep in mind while you’re developing your app.

Remember that your app can be used in a wide variety of real-world conditions, so always make sure your aesthetic choices allow your content to blend in almost anywhere.

Once you’ve decided what kind of AR experience you wish to build, adhere to what you think your rendering budget is, and work in as many optimizations as you can to get that smooth performance and efficient use of power.

And, finally, leverage the various material properties, as well as the built-in features of ARKit, to get your AR content looking great, in order to delight the people who use your app.

And, for your reference, here’s a table of all the material properties we worked with today to build out CampfiAR.

You can get additional information at the link provided.

Thank you.

[ Applause ]
