Introducing RealityKit and Reality Composer 

Session 603 WWDC 2019

Architected for AR, RealityKit provides developers access to world-class capabilities for rendering, animation, physics, and spatial audio. See how RealityKit reimagines the traditional 3D engine to make AR development faster and easier for developers than ever before. Understand the building blocks of developing RealityKit based apps and games, and learn about prototyping and producing content for AR experiences with Reality Composer.

[ Music ]

[ Applause ]

Hi, everybody.

My name is Cody and I’d like to welcome you to the introduction to RealityKit and Reality Composer.

So today, the App Store is full of many different AR experiences that utilize the power of ARKit.

And developers of these apps have a lot of common needs, for example, rendering, physics and animation.

And ARKit makes building AR experiences pretty simple but now we’re going to make it even easier.

Developing applications for AR presents unique challenges that really don’t exist anywhere else, and they mostly center around the fact that your virtual content will now interact with the real world and vice versa.

So if you put a virtual lamp into a real living room, it should probably light up the surrounding objects, whether they’re real or virtual.

And additionally, any content that gets placed in the real world should look like it belongs there.

And this can be very important in a case like online shopping, where you want to see how the product will look in your home.

So this gives us a very strong requirement for a very realistic renderer; otherwise, you’ll lose the illusion of augmented reality.

So enter RealityKit, which is a brand new Swift framework designed to help you build your AR applications and easily exploit the power of ARKit.

RealityKit is an AR first framework, which means that it’s been completely designed from the ground up with an emphasis on AR application development.

So the framework heavily emphasizes highly realistic physically-based rendering and accurate object simulation within the real-life environment.

We also take full advantage of the power of Swift to deliver a high-quality framework that has a very simple API.

In addition to RealityKit, we’re also introducing Reality Composer, which is a Mac and iOS tool that enables simple AR-based content creation.

And its intuitive design is really targeted at anybody who wants to make their own content exist in the world around them, even allowing you to lay out your scenes directly in AR.

But before we get into that, let’s focus on RealityKit and see it in action.

So here we have a real living room with a couch and a table and some small objects that are on the table.

It’s purposely a bit blurry because the camera’s depth of field is very shallow here.

So with RealityKit, we can add virtual objects to this world and believe that they’re really in our environment.

So notice here how very accurate lighting, shadows, and camera effects really help to make the object feel like it belongs, even something fantastical like this.

RealityKit helps with the heavy lifting of making your content fit in the world automatically.

All you have to do is tell the framework where you want your content to show up and then supply any custom logic that’s specific to your app.

And it’s really easy to get up and running with your very first app.

And in fact for that app you just saw, I only needed to write four lines of code.

So here, I’m placing a horizontal anchor in the world.

I’m loading a model that’s called flyer from my asset bundle.

And I’m attaching that model to the anchor.
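A sketch of those lines in Swift, assuming an existing `arView` (an ARView) and that the "flyer" model from the talk is bundled with the app:

```swift
import RealityKit

// Sketch of the four lines described above. Assumes an existing
// `arView` and a "flyer" model in the app bundle.
let anchor = AnchorEntity(plane: .horizontal)      // 1. place a horizontal anchor
let model = try! Entity.loadModel(named: "flyer")  // 2. load the "flyer" model
anchor.addChild(model)                             // 3. attach the model to the anchor
arView.scene.addAnchor(anchor)                     // 4. add the anchor to the scene
```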

So over the course of this talk, I’ll dive deeper into what each of these concepts mean.

But now that we’ve seen this framework in action, let’s go into some of the systems that make up RealityKit and the basics to help you get started right away.

So to help build your apps, RealityKit takes care of a lot for you through all of its built-in systems.

All of which are integrated with other Apple frameworks such as ARKit and Metal.

So first off, the rendering system has the job of making your content look amazing and realistic in a real environment.

And it does this with a physically-based shading system that accurately simulates lighting and material interactions.

It’s all built on top of Metal, which means that it’s highly optimized for Apple devices.

The system takes advantage of all that Metal has to offer, such as multithreaded rendering and other low-level functionality.

And since RealityKit is designed for AR applications, the feature set of the renderer is entirely focused around making your content look great in a real environment.

On top of rendering, animations really help to breathe life into your content and enrich your scene.

And RealityKit’s animation system achieves this through supporting both skeletal and transform animations, both of which can be imported directly from USDZ.

And you can even animate your objects procedurally through ARKit’s motion capture technology.

The physics system is responsible for simulating complex interactions between content, including real world objects.

And it provides a collision detection system that supports several different proxy shapes, such as box, sphere, or even compound shapes.

And it also simulates rigid body dynamics, such as mass, inertia, friction, and restitution.

RealityKit has built-in support for networking and it synchronizes the entire scene across devices, including shared representations of real world data.

And it’s all built on top of Apple’s Multipeer Connectivity framework.

This system works out of the box, and it really makes building connected apps simple and automatic.

RealityKit uses an entity component system to represent object data which gives users a really powerful tool to easily make content through composition of properties as opposed to large object inheritance hierarchies.

And you can even create your own custom components to add your own data and functionality to any entity.

And additionally, all components automatically synchronize their data with other devices in a network setting, even custom components.

So sharing data is a very easy operation.

So far, all the systems that we focused on are really emphasizing a lot of the visual side of AR, but that really doesn’t capture the whole picture.

Audio is also an important aspect in helping to create very immersive content that you can believe is in the real world.

RealityKit’s audio system understands 3D space and can place audio sources on dynamic content.

So this has the effect of making virtual objects further from you actually sound like they’re farther away, and vice versa.

I’d also like to mention that RealityKit has defined a new file type, known as the Reality File, which stores optimized content that’s ready for loading into your app.

And this file can include everything that your app would need.

For example, meshes and materials and physics properties or even audio sources.

RealityKit does also support importing directly from USDZ files, similar to how AR Quick Look works.

But using Reality Files will give you much faster loading times and much greater control over your content.

And you can export these files directly from Reality Composer, which will be covered later in this talk.

So while using the RealityKit API, there are four main areas that you’re going to use no matter what type of application you’re building.

And those include ARView, anchors, scenes, and entities.

So let’s start with the View or ARView as it’s called in RealityKit.

So the View takes care of a lot of the heavy lifting of building your AR apps, so that you can start focusing on your experience right away.

And it comes with a lot of useful functionality for you.

For example, full gesture support, so that you can respond to any gesture on iOS devices, which allows any entity to easily respond to user input; as well as very realistic camera effects, powered by the renderer, that really help to integrate your virtual content into the real world.

And in fact, ARView will give you the exact same quality and feature set as AR Quick Look right out of the box.
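As a minimal sketch (assuming a UIKit app), getting that out-of-the-box behavior is just a matter of putting an ARView on screen:

```swift
import UIKit
import RealityKit

// Minimal sketch: hosting an ARView in a view controller. ARView
// runs its own AR session and applies the camera effects described
// here (grounding shadows, motion blur, grain) automatically.
class ExperienceViewController: UIViewController {
    override func loadView() {
        self.view = ARView(frame: .zero)
    }
}
```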

So let’s take a look at some of these camera effects in action.

So one of the most important effects to help content feel like it’s part of the real world is very convincing grounding shadows.

So notice in this video how when the shadow isn’t present, it becomes difficult to determine where this robot is relative to the table underneath it.

Is it floating in space? Is it sitting on these blocks? It’s really hard to tell.

But when the shadow is there, it becomes very clear where this robot is in 3D space.

And ARView provides two different grounding shadowing techniques which you can choose from to balance performance for your particular application.

So either a simple drop shadow, or the much more realistic [inaudible] shadow, which is what you see here.

ARView automatically reads the camera exposure time, which is provided by ARKit to perform camera-based motion blur on all virtual content in your scene, which helps to match the blur that’s already present in the live camera feed.

I hope I don’t make anybody sick.

OK. So notice how when motion blur is not enabled, this content feels like it’s just stuck on top of the video feed.

Cool. We built a realistic depth of field algorithm to model varying camera focus, which is again using information provided by ARKit.

So when the device camera focuses on a particular feature of the world, ARView will ensure that that virtual content follows the same focusing pattern.

So for this video, the focus of the camera is continually shifting back and forth to focus on each of these columns individually.

[ Applause ]

Finally, ARView adds digital film grain to the virtual content, provided by the new ARKit camera grain feature.

And since all digital cameras have some amount of noise and especially in really low light situations, adding it to your virtual content can really help users feel like the content is part of that world that they’re seeing and not just something that’s stuck on top of a camera image.

So notice here how, when that robot doesn’t have any noise, it doesn’t feel like it fits in the environment very well; whereas when the noise comes in, it really feels like it’s beneath the camera image and not just on top of it.

So next let’s talk about entities which form the main building block for any experience that you’re going to create.

All virtual content in your scene is an entity with different types of components to give it very specific functionality.

Any entity can be parented to any other entity, which helps to establish the structure of a scene and build out a transform hierarchy, so it’s easier to reason about objects within their local space.

So for example, if you have a virtual table and a virtual cup and you want that cup to sit on the table, you might parent that cup entity to the table entity so that they move together in space.
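A minimal sketch of that parent/child relationship (the names and positions are illustrative):

```swift
import RealityKit

// Parent the cup to the table so the cup follows the table.
let table = Entity()
let cup = Entity()
table.addChild(cup)

// The cup's position is now expressed in the table's local space.
cup.position = [0, 0.5, 0]   // 0.5 m above the table's origin
table.position = [1, 0, 2]   // moving the table carries the cup along
```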

Next, let’s talk about anchoring and why it’s such an important concept in AR.

So in the real world, objects are often in motion.

And if we’re projecting virtual content into a real environment, adapting to that motion is crucial for a realistic experience.

For example, if you have content that’s anchored to an image, such as a magazine on your table, that content should stick to it regardless of how that magazine moves in the real world.

And RealityKit solves this problem by exposing ARKit’s anchors as first-class citizens of the API, supporting all of the anchoring types available such as plane, body, face, or camera.

And since many surfaces in the world can be anchors for content, we treat each of these anchors as a local root for entity hierarchies.

So to illustrate how anchors work in RealityKit, let’s consider that there are horizontal and vertical planes in the world that we want to attach content to such as tables or walls.

Each of these could be utilized as an anchor.

And all anchors can have a hierarchy of entities that are attached to them.

And it’s important to note that the entire hierarchy of entities will not be active until a matching anchor is spotted in the world by ARKit.

So for example, if you were to define a horizontal plane anchor and attach entities to it, you won’t actually see those entities in your world until ARKit has successfully recognized a horizontal plane.

So this prevents content from just floating in space until all the anchors have been found.

At any time, new anchors can be introduced to your scene, for example, an image anchor, which might be used to display a virtual frame around a real-life photograph that you have on your wall.

So if you decide to move that photograph for some reason, say you decide you just want it on a different wall or it just needs to be a little bit to the left, the virtual frame will move right along with it.

And finally, let’s take a look at the makeup of an AR scene; any scene you build is going to follow the structure you see here.

So you start with the ARView, which is the entry point to your AR world and it contains a reference to your scene.

And a scene has different anchors which you can add manually as you saw before.

And every time you load a new entity, you can attach it to the anchor of your choosing, or to a previously loaded entity to form an entity hierarchy.

So in this example, we have two anchors that each form their own entity hierarchies.

So now given all the things we’ve just seen, let’s see a live demo of some of the stuff in action.

[ Applause ]

So what we’ll be doing here is we’ll be detecting this plane in front of me and we’ll be placing virtual content on this plane using the procedural mesh generation library in RealityKit.

So let’s bring them out there.

And I’ll also bring in some virtual toys that are going to interact with these boxes.

So there we go.

So notice that the physics system is handling the interactions between all the different objects.

The animation system is animating this airplane as it flies around me.

And notice how RealityKit is automatically handling the lighting, the shadowing, and all the different camera effects to try to make this fit in the environment as much as possible.

So this whole app is only a few lines of code and really just involves loading in the cubes, loading in the meshes, and telling RealityKit where I want them to be.

And that’s that.

[ Applause ]

So next I’d like to invite Tyler on the stage to dive deeper into RealityKit and talk more about how it works.

[ Applause ]

Thanks, Cody.

I’m Tyler Casella and I’ll be walking you through some of the features available in RealityKit and Reality Composer.

Let’s dive right into it by looking at what you’ll need to get started with your AR application.

So as Cody showed you earlier, RealityKit uses the entity component design pattern to build objects within the world.

Entities establish the structure of your scene and build out a transform hierarchy so that it’s easier for you to reason about objects within their local space.

Now if you’re not familiar with the entity component design pattern, it’s no problem, it’s actually pretty straightforward to use.

Entities themselves are actually comprised of many different pieces called components.

And components define the specific behaviors and data that can be added to individual entities.

Now, unlike strictly inheritance-based patterns, using entities and components allows you greater reuse of code and more flexibility.

It actually also provides huge performance benefits under the hood, both in terms of memory layout and multithreading.

So to help illustrate entity and components, let’s take a look at an example.

So let’s say for example we have a couple of object types that we commonly use within our application, like a ball, lamp, and camera.

Now there’s often many pieces of behavior that are shared across all of them.

Let’s take for example, anchoring.

All of these objects need to be anchored within the world, and so we can add an anchoring component to all of them.

Now since both the ball and the lamp have a visual representation, we can add a model component to both of them, but we omit it from the camera because it’s invisible.

And this is where the compositional aspect of entity component design starts to become really powerful.

Now, to allow objects to collide with each other, we can add a collision component to them, and then we continue this until we have the combination that exhibits the behavior that we’re looking for.

And you’ll notice that all of these objects behave in very unique ways.

But by structuring your code in this manner, you’re able to avoid duplicating code and reuse behavior.

So here we can see how entities look in code.

An entity itself doesn’t take any parameters to create, but once you have an entity, you can start adding components to it using the subscript operator.

And in the same manner, you’re able to remove and modify components on these entities.

This allows you to dynamically modify the behavior of an entity.

Now all entities can also contain children, and you can add a child using the addChild method.

And within this hierarchy, setting an entity’s position defines its position relative to its parent.

Now if you’re interested in its position in the world, you can use the setPosition(relativeTo:) method and pass nil as the relative entity.

And this indicates you want world space.
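A sketch of those operations, with an illustrative collision component:

```swift
import RealityKit

let entity = Entity()

// Add a component with the subscript operator; here a collision
// component with a sphere proxy shape.
entity.components[CollisionComponent.self] =
    CollisionComponent(shapes: [.generateSphere(radius: 0.1)])

// Query and remove components the same way.
let hasCollision = entity.components.has(CollisionComponent.self)
entity.components[CollisionComponent.self] = nil

// Build a hierarchy; positions are relative to the parent unless
// you specify another space.
let parent = Entity()
parent.addChild(entity)
parent.position = [0, 1, 0]
entity.setPosition([0, 0, -1], relativeTo: nil)   // nil = world space
```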

Now we also understand developers just want to get started building their applications, so we provide a number of useful variations of entities that come preconfigured and ready to use.

These cover all of the common usages such as establishing anchoring, adding visual content to your scene or dynamic lights.

And by instantiating any of these entities, all of the necessary components come preconfigured and ready to use.

So let’s take a look at some of these entities and how you can use them.

The first of which is AnchorEntity.

And this is our glue to the real world and often the first thing that you’re going to be establishing within your AR application.

With AnchorEntity, you specify what it should be anchored to within the real world.

And as that object moves in your environment, the AnchorEntity is going to stay attached to it.

And because of this, the AnchorEntity is often going to be the root of your experience and then additional content is built on top of it.

AnchorEntity supports all of the anchor types available within ARKit.

This allows you to quickly get your content into the real world.

For instance, you can specify an image or an object that your content should be glued to, and then upon detecting that anchor that matches your specification, it’ll automatically become attached and appear in the world.

And if you already have an AR anchor or an AR raycast result, you can anchor to that directly as well.

So let’s take a look at how you can create anchor entities in code.

Now when you’re creating anchor entities, you’re describing what you would like to be anchored to.

So here I’m describing that I would like this AnchorEntity to be attached to a horizontal plane that has a classification of table and a minimum bounds of half a meter by half a meter.

Next, we add that anchor to the ARView scene and once added to the scene, the anchor doesn’t necessarily immediately become active.

Remember, if ARKit hasn’t detected an anchor that matches that specification, it will remain inactive.

And then once we do detect an anchor that matches it, both the AnchorEntity and all of its sub-entities will become active.
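A sketch of that anchor specification in code (the plane alignment, classification, and bounds are the values from the walkthrough):

```swift
import RealityKit

// A horizontal plane classified as a table, at least 0.5 m x 0.5 m.
let arView = ARView(frame: .zero)
let anchor = AnchorEntity(plane: .horizontal,
                          classification: .table,
                          minimumBounds: [0.5, 0.5])
arView.scene.addAnchor(anchor)
// The anchor and its sub-entities stay inactive until ARKit
// detects a plane matching this specification.
```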

Now although AnchorEntity is often the root entity of your experience, there are situations in which you’re going to want multiple of them.

Take this for example, where I have content that I would like to be anchored to a table, but also separate content that I would like to be anchored to an image.

These all exist within the same scene but I have two different real-world objects that content is being glued to.

Now once you’ve established what you want your content to be attached to, the next step is actually getting content in there.

And that’s where ModelEntity comes into the picture.

And this is really the workhorse of RealityKit.

This is what you’re going to find yourself using very often.

ModelEntity comes with all the building blocks that you need to integrate physics, animation, and rendering in the virtual world.

In entity component terms, a ModelEntity is an entity with a model component, a physics component, and a collision component.

And these entities can be created dynamically in code or you can load them directly from USDZ or Reality File.

So here we’re loading a ModelEntity from a file and you’ll notice that we make the load very explicit with the loadModel call.

This is so that you’re always aware when you’re doing something that’s potentially heavyweight and could be blocking the render thread.

And in all of these cases we provide you asynchronous variants to address that, which we’ll cover in a later session.

Now once your model is loaded, you simply attach it to your anchor and you’re good to go.

As soon as that anchor is detected, your model will show up in the world and remain attached.
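A sketch of that load-and-attach flow; "toyRobot" is a hypothetical asset name, not one from the talk:

```swift
import RealityKit

// Synchronous load as just described. loadModel can block the
// render thread, so prefer the async variants for large assets.
let anchor = AnchorEntity(plane: .horizontal)
if let model = try? Entity.loadModel(named: "toyRobot") {
    anchor.addChild(model)   // appears once the anchor is detected
}
```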

Now let’s take a look at the actual anatomy of a ModelEntity.

The first item every ModelEntity contains is a mesh resource, which provides the geometric representation of the model.

Mesh resources can either be generated directly as a primitive or the result of loading from a USDZ or Reality File.

And because meshes are often heavyweight, they can be shared across multiple entities, and this also allows us to optimize our rendering by batching draw calls together under the hood.

Now if you’re not getting your mesh from a file, you can also generate them directly from a concise set of primitives.

This includes boxes, spheres, planes, and text, which supports all of the fonts on our platform.

Now although meshes define the geometric structure of a model, you still need a way to define how a model should look.

And this is where materials come into play.

Materials provide the look and feel of an object and how it interacts with the light around it.

Now RealityKit provides rendering that’s physically based, which means we simulate how light actually behaves in the real world, to help ensure that it’s going to blend seamlessly within it.

And materials customize how an object participates within the simulation.

Materials can either be predefined when you load a ModelEntity or you can create a material yourself with some of the material types that we provide in RealityKit which we can take a look at.

The first of these is SimpleMaterial.

Now typically a physically-based material has a lot of parameters, and that’s because lighting in the real world is really complex.

Now SimpleMaterial condenses these into a concise set of parameters that take either scalar inputs or a texture if you want to vary an input across a surface.

SimpleMaterial allows you to create a glossy plastic or a brushed metal or even a tinted glass.

And you can do this with the three main properties: baseColor, roughness, and metallic.

As the name implies, baseColor establishes the overall color of the material, allowing you to change an object from red to blue, for example.

And the metallic parameter simulates how conductive that material is, which affects how light interacts with that surface.

So, having a metallic value of zero indicates that it’s a poor conductor.

And this actually means that light tends to transmit into that surface making the color of the subsurface dominate the appearance.

Now if we up that parameter to 1.0, this indicates it’s a good conductor which causes an immediate reemission of light at the surface level.

And this is what gives it that metallic look.

And the roughness parameter affects the perceived roughness of the object and how reflective the surface appears.
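A sketch of those three parameters in use; the specific values are illustrative, not from the talk:

```swift
import RealityKit

// Dielectric look: poor conductor, fairly glossy.
var plastic = SimpleMaterial()
plastic.baseColor = .color(.red)
plastic.roughness = 0.2
plastic.metallic = 0.0

// Metallic look: good conductor, softer brushed reflections.
var metal = SimpleMaterial()
metal.baseColor = .color(.gray)
metal.roughness = 0.6
metal.metallic = 1.0
```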

Now we also have UnlitMaterial, which also provides coloring of an object, but unlike SimpleMaterial, it does not participate in the physically-based rendering and is unaffected by the lights in the scene.

So this gives it an overall flat coloring.

This is often useful for text or any content that should remain bright, even when the environment is dark.

So here we can see an example of that in usage.

Here I placed a virtual television within the environment and I’ve used SimpleMaterial for the TV box, but UnlitMaterial for the actual screen.

And you’ll see as I turn off the lights, the TV dims but the screen remains bright, and this is because it’s unaffected by the environment’s lighting.

Both SimpleMaterial and UnlitMaterial allow for a wide range of appearances, but RealityKit also provides a third material that behaves very differently but is just as important, and that’s OcclusionMaterial.

Now instead of affecting an object’s appearance, OcclusionMaterial instead masks any objects behind it and falls back to the camera passthrough.

This gives the appearance of objects moving behind real-world objects.

So here we can see common usage in which I’ve created a plane on top of the real-world table and then I’ve applied that OcclusionMaterial to it.

This gives the illusion that the object is coming up out of the table.

And you may have noticed this during Cody’s demo as well.

Now if you don’t do this, it can often be difficult to gauge the position of an object and can really break that immersive experience for the user.

So these cover the basic building blocks of ModelEntity, and we can take a quick look at how you can use mesh resources and materials in code.

Here we’re generating a box mesh, and all of our calls to dynamically generate mesh resources use the generate prefix convention.

And this is again to make sure it’s clear that work is being done under the hood.

Now since materials are lightweight and fast to initialize, they can be created directly from their initializer.

And once you have both your mesh and your material, you simply create a ModelEntity and pass these along.

And with this, you have your basic building block from which you can begin building the visual content within your AR experience.
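The steps just described can be sketched like this (size and color are illustrative):

```swift
import RealityKit

// Generate a mesh, create a material, and combine them into a
// ModelEntity.
let boxMesh = MeshResource.generateBox(size: 0.2)   // 20 cm cube
let material = SimpleMaterial(color: .blue, roughness: 0.3, isMetallic: false)
let box = ModelEntity(mesh: boxMesh, materials: [material])

// It attaches to an anchor like any other entity.
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(box)
```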

But if you really want to bring it to life, you’re going to want to add animation as well.

And as Cody showed you, RealityKit supports two primary forms of animation, skeletal and transform animations.

These can both be loaded from file but transform animations can also be created directly within RealityKit.

Both types of animations can be played from the playAnimation method on all entities.

And when you play an animation, you’re given back an animation controller object.

This controller allows you to control the playback of the animation.

So with it, you can pause a playing animation to maintain its current timing, or you can check the current state of the animation and resume it, or even stop the animation.

Now if you don’t have any animations baked into your assets directly, you can still animate your entities with the move(to:) function.

With this, you first provide the transform that you’d like to animate to, and here we’re moving 5 meters forward.

And then you optionally define the relative space that transform is in.

Here it’s in the world space, so I provide nil.

Next you provide the duration that you’d like the animation to take place, and then the easing function to define how smooth that animation should be from start to finish.

And with that, you’ve created a simple transform animation.
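Both flows might be sketched like this, assuming an entity that already exists in the scene:

```swift
import RealityKit

// Transform animation via move(to:): 5 m forward in world space
// (nil = world space) over 2 seconds with an ease-in-out curve.
let entity = ModelEntity()
var target = entity.transform
target.translation += [0, 0, -5]
entity.move(to: target, relativeTo: nil,
            duration: 2.0, timingFunction: .easeInOut)

// Baked animations play via playAnimation, which hands back a
// controller for pausing, resuming, or stopping playback.
if let animation = entity.availableAnimations.first {
    let controller = entity.playAnimation(animation)
    controller.pause()
    if !controller.isComplete { controller.resume() }
    controller.stop()
}
```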

So by combining all of these together, RealityKit focuses on enabling you to build AR experiences.

And this year, in addition to a new framework, we’re also introducing a new tool to accompany your AR development workflow, and that’s Reality Composer.

We’re introducing Reality Composer for macOS, available in Xcode 11, and for iPad and iPhone, available in the App Store.

And with Reality Composer, we’re really focusing on helping iOS developers to easily create AR and 3D experiences.

We know that moving to 3D can be a really daunting task, and with Reality Composer we want you to feel right at home building out your AR scenes.

There’s a number of features that we’ve integrated into the tool to help you with this, which we can dive into.

Now one of the first things that you tackle when you’re starting a new project is often simply just getting stuff into the world.

And we wanted to make sure this was easy, by already providing you a set of content that you can either directly integrate into your scene or you can use as a placeholder until you land your final content.

This library ranges from simple shapes to common objects to even large buildings.

And these aren’t static assets either.

Here we have a spiral shape, which provides a number of parameters that give you a flexible range of creativity.

Here I’m able to take it and extrude it and compress it and then apply an aluminum material to it to really give it the look and feel of a metal spring.

And we provide rich models as well that you can integrate your own content into directly, such as this picture frame.

Here I’m able to pick a photo and then place it within it and then pick and choose a different frame that matches the style of the experience that I’m building.

Reality Composer also gives you a “what you see is what you get” environment, making it easy for you to lay out and pre-visualize your scenes, as you’re building it.

Here I’m able to duplicate the row of pawns and swap out their appearance to build out the other side of the chessboard.

And a big part of pre-visualization is actually seeing your content in AR so that you can ensure that it’s correctly scaled and fits well within that real-world environment.

Now Reality Composer also allows you to add behaviors to your scene and provide simple interactions that help bring that AR experience to life.

So let’s walk through a quick example.

Here I’ve got a chessboard that I’d like to add an interaction to: when I tap the black rook at the top, it will move over and put the king into checkmate.

So my first step is to open up the behavior panel and add a new behavior.

Now there’s a number of predefined behaviors provided, so I’ll go ahead and pick tap to flip.

Now, this gives me a tap trigger, which will cause a sequence of actions to play, which currently plays an emphasis animation on my rook.

For my interaction, I’m going to modify the motion type to be a jiggle instead of a flip.

Next, I’ll add a move by action to move that rook into place and I’ll adjust my movement just slightly.

And now we’re good to go and I can preview my interaction.

Now when I tap on my rook, it does a little jiggle, and then it moves over to put the king into checkmate.

[ Applause ]

So with all of these features, we really wanted it to be easy for you to make great looking content ready to view and experience in AR.

And all of these features that I’ve shown you here on the iPad are also available on macOS and iPhone.

Now although Reality Composer runs as a separate application, it’s tightly integrated within Xcode’s development ecosystem.

When building your Xcode project, the content within your project is integrated and optimized into your application.

But there’s some magic that happens in between building your Xcode project and producing your application.

And that’s code generation.

As part of this step, the content from your project is compiled and optimized into a Reality File.

And next we introspect that file and generate customized code for accessing objects that you’ve created within your scene.

Now this code generation step is performed automatically behind the scenes by Xcode and establishes a strongly typed access to the content within your scene.

And this has the wonderful advantage that if there are any mismatches in the naming of the objects in your scene, you get a compilation error instead of a runtime crash.

Code generation also gives you direct access to invoke triggers within the behaviors of your scene and create custom actions within your code.

We feel that with this the experience of building content in Reality Composer and coding in Xcode is seamless.

And that’s a quick overview of Reality Composer.

Now there’s a ton more I’d love to show you, because we’ve really only covered the tip of the iceberg today.

There’s many other areas that I haven’t tackled like networking, physics bodies, ray casting, gestures and more, so I encourage you to come to some of our other sessions.

Tomorrow at 10:00 a.m., we have the Building Apps with Reality Composer session, which is going to showcase how to build a multiplayer AR application with RealityKit.

And Thursday at 11:00 a.m., we have the Building AR Experiences with Reality Composer session, which explores how you can integrate Reality Composer into your AR application workflow.

And please come join us in the labs tomorrow at noon.

We’re happy to answer any questions you may have and help get you started.

So thank you for coming.

And I hope you have a great WWDC.

[ Applause ]
