SceneKit in Swift Playgrounds 

Session 605 WWDC 2017

Discover tips and tricks gleaned by the Swift Playgrounds Content team for working more effectively with SceneKit on a visually rich app. Learn how to integrate animation, optimize rendering performance, design for accessibility, add visual polish, and understand strategies for creating an effective workflow with 3D assets.

[ Applause ]

Thank you.

Good morning everybody, and welcome to SceneKit in Swift Playgrounds.

My name’s Michael DeWitt and I’m really excited today to give you an inside look into how we use SceneKit to make the Learn to Code content.

So hopefully many of you are already familiar with Swift Playgrounds, but let me show you Learn to Code.

So this is an example lesson from the Learn to Code content that lives inside Swift Playgrounds.

On the left-hand side we have the lesson and the user’s code.

But for this talk we’re really going to focus on the right-hand side.

Notice the live view.

If we could actually take that full screen to get a little better view of it.

This character, Byte, is now literally sprinting around this map to collect the gems, moving up and down stairs, and after Byte has collected all the gems, we get a congratulations sequence that lets the learner know they’ve done something great, and Byte even has a few dance moves to let people know it’s feeling really good.

So this is our case study today.

We’re going to work through this scene on how to use SceneKit effectively and really bring the richness of 3D to your applications.

So if you’re interested in 3D but are maybe pretty new to it, you’ve come to the right place because we actually started with something very different than this.

We started with a simple 2D scene.

And one of the best parts about SceneKit is it allowed a couple of 2D programmers to take this scene, on an existing timeline, and make the rich 3D content that we ship in Learn to Code.

So that’s what we’re going to talk about today.

We’ve got 40 minutes and we’ve broken it down into three sections.

First, I’ll talk about prototyping and how we refined the idea to make sure it was really good.

In iterating I want to go over when you’re actually getting some real assets from your vendor, how you can establish an effective pipeline.

And then Lamont will come on stage to talk about tuning and getting your scene ready to ship.

First up is prototyping.

This is coming at a phase right after we decided on the idea for Learn to Code and we’re really ready to start building something.

So you saw this graphic before, but here it is in context.

We started off with gems that came from the emoji tray and assets we had lying around because we just needed to get something up and running as quickly as possible to test the interaction model in this new app.

So we learned a lot from this.

It didn’t matter what the assets looked like, but through this prototyping process we started to get some early feedback.

Some of those comments were about the graphics.

It was requests like could we change the gem color, could we add a border around the scene, or could we pivot the camera at the end to give a little bit of visual interest.

While this is good feedback, it really is very iterative, and when you’re prototyping, you shouldn’t be afraid to throw the whole thing away.

And so if we go back and look at the scene in context, you can see it just is very flat on this page.

So all of the feedback we’re getting is about the visual look.

We need to just re-evaluate our strategy.

So we had been working in SpriteKit, and now we started exploring SceneKit.

And for many of you who are familiar with SpriteKit, you’ll know that it has these concepts: basically a scene to do your update logic, a node to place things in the view, and actions to move those objects around.

Now, the benefit of having SceneKit also developed at Apple is that it has a lot of those same concepts.

So this gave us enough confidence, since it’s largely a simple prefix switch, to get started with SceneKit and start diving in.
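To make that parallel concrete, here is a rough sketch of the prefix switch, showing how the SpriteKit concepts map onto SceneKit types (the box geometry and move action are illustrative, not from the actual content):

```swift
import SceneKit

// The SpriteKit concepts map almost one-to-one onto SceneKit types,
// often just by swapping the SK prefix for SCN:
//   SKScene  -> SCNScene    (update logic lives here)
//   SKNode   -> SCNNode     (places things in the view)
//   SKAction -> SCNAction   (moves those objects around)

let scene = SCNScene()

// A node with a box geometry, placed in 3D instead of 2D.
let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1,
                                       length: 1, chamferRadius: 0))
scene.rootNode.addChildNode(boxNode)

// Actions feel just like SKAction, but with SCNVector3 positions.
let move = SCNAction.move(to: SCNVector3(0, 0, -5), duration: 1.0)
boxNode.runAction(move)
```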

So like many of you out there, we watched a WWDC presentation from 2015.

It’s a great talk given by the SceneKit Team on how to build a simple scene like the one you see here, and not only that, but we got to use some of the assets from the sample and recreate our scene.

So now we got to this stage and immediately we could tell this is just way better.

I mean, it’s far more immersive, you can freely pan the camera and it actually helps you solve the levels.

But the point of this slide is that even though we’ve definitely bumped up the visual quality, when you’re prototyping, you still want to keep that visual fidelity low.

So when we’re adding new game mechanics like these portals, we’re doing it with SceneKit primitives because we don’t want to get too hung up on making sure the scene looks absolutely great.

We want to make sure it’s a good idea first.

So if I had to throw this up on a Business Tool 101 chart, you basically look at the whole timeline of your project.

You want to allocate a large chunk of that to prototyping because it’s the phase with which you can make the biggest changes and the effort to change is low, especially when you’re working in 3D.

When you start to get real assets, and in this next section that we’re going to talk about, the effort to change goes way up.

So be mindful of that when you’re prototyping.

So in summary, really, you want to work on testing your interaction model when you’re prototyping.

It’s not about the assets.

You want to interpret the feedback you receive, but don’t look for incremental changes.

Look to make sure the idea is valid.

And then finally, take the insights from this section, but leave the code.

I think one of the best decisions we made was to actually start file-new-project when moving forward past this point, which brings us to iterating.

So now we’ve got the idea and we’re ready to get some real art, so we started working with an artist and we received this early 2D comp.

So you can start to see it resembles Byte’s world now.

It looks way better.

And I want to break down this world into four parts.

The first thing we’ll talk about is how it’s constructed and some strategies to do that effectively.

Next, we’ll look at how you can accomplish complex animations in your app by looking at how we made Byte move up the stairs.

We’ll look at how you can add visual interest with water and other scenery elements.

And then we’re really focused on the visuals in this talk, but there’s a whole other area of your users that actually won’t be able to benefit from the visuals of a 3D scene, so we’re going to spend some time talking about accessibility support, and VoiceOver specifically.

All right.

First up, modeling this world.

So as you can sort of see by the stencil we built this out of blocks.

We had individual assets.

There was a couple reasons to do this.

We needed to not only iterate on the asset design, but also on the lesson design.

So we were putting together very simple puzzles like this to make sure that learners had a smooth path through the curriculum.

But rather than placing these individual blocks in a Scene Editor, which would be pretty tedious, we actually wrote some code to do this.

So much like learners use in Learn to Code 2 when they’re building their own worlds, we wrote some code to generate this and it looks like this.

So to build that world you first give it a size, 5 by 5.

You place items into it, like actor, or Byte in the scene you saw before.

And then you can add additional elements, such as gems or the water that you saw in the center.
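A minimal sketch of a world-building API along those lines might look like this. The type and method names (`GridWorld`, `Coordinate`, `place`) are illustrative assumptions, not the actual Learn to Code source:

```swift
// Hypothetical world-building API, for illustration only.
struct Coordinate {
    let column: Int
    let row: Int
}

final class GridWorld {
    let columns: Int
    let rows: Int
    private var items: [(name: String, at: Coordinate)] = []

    init(columns: Int, rows: Int) {
        self.columns = columns
        self.rows = rows
    }

    // Placing an item records gameplay data only; building the SCNNodes
    // for it can happen separately, keeping data independent of visuals.
    func place(_ item: String, at coordinate: Coordinate) {
        items.append((item, coordinate))
    }
}

// First give it a size, 5 by 5...
let world = GridWorld(columns: 5, rows: 5)
// ...then place items into it.
world.place("actor", at: Coordinate(column: 2, row: 2))
world.place("gem",   at: Coordinate(column: 0, row: 1))
world.place("water", at: Coordinate(column: 2, row: 3))
```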

But the reason I’m showing you this code is not because, wow, we wrote an API to build a world, that’s cool, but it’s really because this is totally independent from the graphics.

Right. This code will be just as valid in 2D as it is in 3D.

And let me show you actually what I mean by that.

I have a short video here for you.

So here we have Byte moving around this world and we’re going to add in a few nodes.

The green and red nodes you see in the scene actually represent the data that we’re reconstructing gameplay with.

And then really, that data is all we need.

We’ve actually separated the visuals of the scene from the data that’s used to reconstruct gameplay.

There are a couple of big benefits to this, and I want you to think about it when you’re modeling a 3D world.

So separate the data from visuals.

First is it allows you to swap out assets easily.

Remember, we’re still iterating on this stuff.

We will get a new version of the block any day, and we don’t want to have to rebuild those maps, so we’re dynamically generating them.

It also allows you to take that data and send it elsewhere.

Maybe you need to send it across a network or send some gameplay logic across processes, like we were doing in some playgrounds.

And later on down the road it will also allow you to optimize the geometry.

And Lamont will get into that in more detail, but it’s really key that you’re not dependent on the actual nodes and scene for this to work well.

And I have one caveat.

You need some debugging tools to make this work really well.

We found that out early on.

You can’t just look at the world anymore and see how gameplay will be reconstructed, and so we actually built a pretty simple Mac app.

This app can actually load in all the levels that we have, and more than that, it has scene-specific knowledge.

So in this case this is the tool that allows us to show those debugging nodes you saw before, and it also can run hard-to-hit cases in our game, like rotating around the world when you hit the congratulations sequence.

We want to make sure that works on every map, but we don’t want to test every map all the way to the end just to see that work.

So that’s our first stop.

That’s how we put the world together.

We separated the data from visuals and we used tools.

Now on to animations.

So if we look closely at the stairs, you can see this is actually a fairly complicated piece of geometry.

Right. Not only are there individual steps, but there are little cutouts from the step.

So we want to be super precise about where our character’s foot lands on each step.

So we considered a couple different strategies, and one thing that’s pretty common for 3D scenes to use is just swap this out for a ramp because ramps are easy.

You have a character, you move that character forward, you figure out how far up they need to move, and you just translate them from point A to point B while running the walk cycle.

For stairs, not so great.

So here’s Byte trying to walk up the stairs, and if you look closely, Byte hasn’t even gotten to the first step and he’s already floating in midair.

So we need to do something a little bit better.

The second thing we considered is using a built-in type in SceneKit.

It’s actually part of SceneKit’s constraint system to be able to do inverse kinematics.

Now, inverse kinematics allow you to be super precise about where you want the character’s foot to land.

So we would specify each step where we wanted the character to take, but it comes with a sacrifice of some personality in the character.

Right. We’re not able to control the eye movement or the upper body as detailed as we’d like.

So we actually went with a third option, and that’s to bake the displacement into the animation.

And so because this is the one we went with, let me break it down for you in a little bit more detail.

Usually, most games you have a node, which represents a position, and you have geometry, which is what you’re actually seeing in the scene.

Now, it’s common for these two things to move together, so you translate the node and the geometry, while playing the walk cycle, it moves the character from point A to point B.

But for the stairs we did something different.

We leave the node alone and we apply an animation, which actually has displacement in it.

So this moves the geometry away from the node and then when that animation is complete, we synchronize the node’s position and remove the animation.

We do that with a type in SceneKit called an SCNTransaction.

And SCNTransactions allow you to make sure that update happens in one frame.

So let me show you what that looks like.

You set up the transaction with the begin and commit calls, and in our case we want an animation duration of zero because we need it to happen in the exact same frame.

We move the character to the new position and we remove the animations, making Byte ready for the next round of animations.
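Put together, that synchronization step might look something like this, a sketch where `character` and `finalPosition` are assumed to come from your scene and your animation data:

```swift
import SceneKit

// Snap the node to the geometry's final position and strip the
// animation, in the exact same frame.
func synchronize(_ character: SCNNode, to finalPosition: SCNVector3) {
    SCNTransaction.begin()
    // Zero duration: the position update and the animation removal
    // must land in the same frame, with no implicit animation.
    SCNTransaction.animationDuration = 0

    character.position = finalPosition   // move the node to match the geometry
    character.removeAllAnimations()      // ready for the next round of animations

    SCNTransaction.commit()
}
```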

So let’s see this in action.

Got to stretch it out first, walks up the stairs, and you can see now, because we’re allowing our animator the freedom to put the displacement in the animation, we can be far more precise about Byte’s movements.

Byte’s head turns while it walks up and down the stairs.

So this is a much better solution and something you should consider for complex animations in your scenes.

On to train stop number three, and that’s to look at how we did the scenery elements, because it’s not all about the character.

You also need to make sure the world feels alive.

So let’s take a close look at the water.

Now, you saw before we have been using maps like this, right, fairly basic, just enough to reconstruct the puzzle with, but we want to get to a point where our maps look like this.

And the way we did that is to actually save the original map out to an SCN file so we can add in those additional elements.

Right. So instead of writing code to place each scenery element, we can do it in the Scene Editor now because it makes much more sense.

So if we look at that in the SceneKit Scene Editor, it looks great, but now we’re investing a ton of time and effort into each individual map.

You can see that by the node hierarchy on the left there.

So the problem is you still want to keep some amount of flexibility.

Make sure if the artist comes in next week and says I have a new waterfall that would look so much better, you’re not changing out 81 maps.

And the way to do that, if we look closely at the waterfall, is to use a technique called reference nodes.

So these are the water nodes in our scene and the arrow indicates that they’re being referenced out to a single SCN file.

So you update that file in one place and it propagates through all your maps.
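In code, placing one of those reference nodes might look like this sketch. The file name `Water.scn` and the position are assumptions for illustration:

```swift
import SceneKit

let scene = SCNScene()

// Each water tile points at a single Water.scn file, so updating
// that one file propagates to every map that references it.
if let waterURL = Bundle.main.url(forResource: "Water", withExtension: "scn"),
   let water = SCNReferenceNode(url: waterURL) {
    water.load()                          // resolves the referenced scene's contents
    water.position = SCNVector3(2, 0, 3)  // place this instance in the map
    scene.rootNode.addChildNode(water)
}
```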

Now, that’s not all there is to water.

If we take a closer look at that SCN file, water kind of also has to move for it to be interesting.

Right. The artist did a great job here, the texture looks amazing, but it’s not real.

So in order to accomplish moving the water, we’re using a technique, a geometry modifier, actually writing a shader, and you access that by the button down in the lower right here.

I’ll zoom in on it for you a little bit.

It’s going to be hard to see.

And that will bring up a tray, new in Xcode 9, where you can specify your geometry modifier, built off the geometry shader-modifier entry point provided to you by SceneKit.

Now, all this is doing is moving the texture around the waterfall, but it adds a great effect.
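You can also attach that kind of modifier in code. This is a sketch of the general technique rather than the shipping shader; it scrolls the texture coordinates over time using SceneKit’s geometry entry point, and the scroll speed is an assumption:

```swift
import SceneKit

// A geometry shader modifier that scrolls the texture coordinates
// over time, giving a flowing-water effect on a static mesh.
let scrollingTexture = """
_geometry.texcoords[0] += vec2(0.0, u_time * 0.2);
"""

let waterNode = SCNNode(geometry: SCNPlane(width: 1, height: 1))
waterNode.geometry?.shaderModifiers = [.geometry: scrollingTexture]
```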

So let’s check it out in action.

There we go.

So now the water actually is flowing, you can just see the textures moving around and it adds this great effect.

And we use that same technique for the vines that sway in the scene and for the grass that blows in the breeze.

So this can add a lot of life to your scene and it’s a great technique for you to try out.

That’s three.

And we’ve really been focused on the visuals.

We’ve taken a number of stops to figure out how you can make that great, but there’s a whole other aspect you have to consider and that’s what the scene would look like to a visually-impaired user.

So when you’re trying to design a great experience in VoiceOver, you want to focus on things other than the visuals obviously, but without describing everything we did, I first want you to just listen to the experience.

[ Music ]

VoiceOver on.


The world is five columns by five rows.

Column 0, row 0, [inaudible] at height 0 facing north.

Double-tap to switch characters.

Column 0, row 1, gem at height 0.

[ Music ]

So we did a number of things to support VoiceOver in Learn to Code, but the first thing I want you to notice is we’re actually focusing on a great nonvisual experience.

By adding music, by adding character noises, you make the scene auditorily rich.

There’s other things we added to VoiceOver, and to go for a deep dive on those techniques, you can check out a great talk this year about how to make your media and games accessible.

The one that was really important for us is actually describing the important locations using VoiceOver, and the reason I want to show you this in more detail is because it’s surprisingly easy to do.

So just like in your UIKit apps we’re overriding an accessibility element so that we can provide a custom label.

In this case we’re providing a label which is updated with the current contents of the world.

So this is basically the same technique you’re already used to in UIKit, and it really is that simple.

You create one of those elements, you specify its frame, we’re using projectPoint from SceneKit to get from 3D to 2D, and you add that element to the view.
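As a sketch, that technique looks something like the following, assuming `scnView` is your SCNView and `gemNode` is a node whose location you want described (the label text and the 44-point frame are illustrative):

```swift
import UIKit
import SceneKit

// Expose a 3D location to VoiceOver as a standard accessibility element.
func makeElement(for gemNode: SCNNode, in scnView: SCNView) -> UIAccessibilityElement {
    let element = UIAccessibilityElement(accessibilityContainer: scnView)
    element.accessibilityLabel = "Gem at column 0, row 1, height 0"

    // projectPoint takes us from 3D world space to 2D view space.
    let projected = scnView.projectPoint(gemNode.worldPosition)
    let center = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
    element.accessibilityFrame = CGRect(x: center.x - 22, y: center.y - 22,
                                        width: 44, height: 44)
    return element
}

// Then add it to the view:
// scnView.accessibilityElements = [makeElement(for: gemNode, in: scnView)]
```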

So my point is even though 3D seems really tough to make accessible, it’s quite easy and it’s already techniques you’re familiar with.

So there are really three main reasons to support it.

One, it’s great for your users.

It was one of the aspects of working on this project that I thought was most rewarding.

Two, is that it’s just easy to do.

It’s many of the familiar things you’re used to from UIKit development.

And three, just like there are no excuses for not making your apps in UIKit accessible, there really should be no excuses for doing so in 3D.

So if you want to see this code in full detail, you can always check out the source for Learn to Code by diving into the auxiliary sources in the Playground book, and this file actually sits in AccessibilityExtensions.swift.

So that is iterating.

We talked about how you should separate your data from the visuals of your scene.

Right. We did that for modeling the world, but also how we did the stair animation.

You should value flexibility even at this phase.

So when you’re putting all that time and effort into your levels, make sure you’re doing things like using reference nodes so that you still have some flexibility.

And then finally, make sure you audit that accessibility support early.

It’s not something that can be bolted on at the end, and it’s surprisingly easy to do if you plan for it.

Now to talk about how we took these design time assets and really tuned them up to make them super performant, I’m going to invite Lamont up to the stage.

[ Applause ]

Thank you, Michael.

That’s great [applause].

Hello everyone.

I’m Lamont.

I’m an engineer on the Swift Playgrounds Content team, and today I’m going to talk to you about how you can improve the performance of your SceneKit apps in terms of frame rate and user experience.

When we first started developing Learn to Code, one of the things that was critical for us was to have a really rich and detailed world, and as you can see here, that’s exactly what we have.

The waterfalls look realistic, the shadows behind the staircases look pretty good, the colors are rich and vivid.

We even have nice statues hidden in the waterfalls there.

But as you know, a good-looking application isn’t the only aspect of a great experience.

It’s also performance.

So what does that actually mean?

To have a good experience we actually want to have a really responsive frame rate.

Your users are interacting with your application, they’re pinching, they’re gesturing, they’re adding things to the scene, removing things.

You want this to be really fast and very fluid.

So I’m going to show you how we actually increased the performance of our application in Learn to Code.

Let’s take a look at one of our geometrically complex scenes.

And when I say geometrically complex, what I mean is this scene is comprised of thousands of individual geometry parts.

Now, each of these parts have to be rendered separately by the GPU.

So let’s take a look at what the performance of our application is.

SceneKit has a really useful tool called the Debug Statistics View and I want to zoom in on it now and take a look at it.

Now, you can enable this in your applications by simply setting the showStatistics property on your view to true.

If we take a look at some of the more interesting numbers on this debug view, you’ll notice we have a low frame rate, 29 frames per second.

That’s not very great.

What we actually really want is 60 frames per second minimum.

This allows us to get that fluid interaction as the users are pinching and gesturing around.

Well, what contributes to this low frame rate?

Well, let’s break it down.

What are we doing on each frame?

This number here, the rendering number, is the amount of time we’re taking to render one complete frame.

It looks like we’re taking 20.4 milliseconds.

That’s pretty slow.

If you do the math, if you want to hit 60 frames per second, you have to be under about 16.7 milliseconds per frame.

So what are we doing that’s taking 20 milliseconds?

Well, to aid in that, one thing we can look at is the number of draw calls, as you can see highlighted by this diamond.

Now, what’s a draw call?

A quick refresher.

When you want to draw objects in a scene, the CPU has to tell the GPU, hey, draw this mesh.

The GPU draws that mesh, or geometry object, and that’s one draw call.

It looks like we have 877 of them.

That’s quite a bit of draw calls.

So what are some of the things we could do to actually increase the performance of our app?

Well, I’m going to share with you one tip.

The theme throughout the rest of the talk is how do we reduce our draw call count.

I’m going to talk to you about that in three distinct phases: geometry, which is the meshes that comprise the scene; materials, which give our scene a nice look; and lighting, which kind of brings our scene to life.

Let’s take a look at an example level that we showed earlier.

Now, this scene has lots of individual parts.

I want to focus in on one type, the grass.

Now, if you look closely, and you’re good at math, you can count roughly 30 individual tiles in this scene; each has its own mesh, so the CPU has to tell the GPU to render these things one by one, so we get 30 draw calls.

Now, this is a small scene.

Imagine if you wanted to have like a huge expansive world with thousands of these tiles.

Now we’re going to have thousands of draw calls.

Imagine what happens to our frame rate then.

One thing you might notice is these grass tiles may not actually move, so if they don’t move, why are we drawing so many of them?

Could we draw perhaps one big mesh?

That’s possible.

There’s one rule I want you to remember: when you’re talking to the GPU, there’s one draw call per mesh.

You have a thousand meshes, you have a thousand draw calls.

So if we look at our grass tile and we want to actually combine them somehow, that sounds like a reasonable technique.

I’m going to show you how we can do that.

Let’s say we have two geometry objects in our scene.

On our left we have a grass tile.

On our right we have another grass tile.

They both reference the same material, they have the same look, they’re just displaced and they have different locations in 3D space.

When you send this to the GPU, you’re going to get two draw calls.

If we merge the two together through a process called flattening, what we’re doing is saying let’s take all the points out of mesh A and combine them with the points in mesh B into one super-Godzilla mesh.

The beauty of this is now all these points reference just that one material and when the CPU talks to the GPU, all it has to do is say, hey, draw this one thing.

Done. Now, this sounds pretty trivial and it’s really easy to use.

You can use it in your applications by a method on SCNNode called flattenedClone, and what you want to do here is just make sure that there’s a parent node that contains the nodes that you want to flatten, and the return of this is a new flattened mesh that you can composite in your scene and replace the other nodes.
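Here is a sketch of that workflow: gather the static tiles under one parent node, flatten that parent into a single mesh, and swap the result into the scene. The 5-by-5 grid of box tiles is an illustrative stand-in for real assets:

```swift
import SceneKit

let scene = SCNScene()

// Group all the static tiles under a single parent node.
let staticTiles = SCNNode()
for column in 0..<5 {
    for row in 0..<5 {
        let tile = SCNNode(geometry: SCNBox(width: 1, height: 0.2,
                                            length: 1, chamferRadius: 0))
        tile.position = SCNVector3(Float(column), 0, Float(row))
        staticTiles.addChildNode(tile)
    }
}

// One draw call per material instead of 25: flatten the parent into
// a single combined mesh and composite that into the scene instead.
let flattened = staticTiles.flattenedClone()
scene.rootNode.addChildNode(flattened)
```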

Now, this simple technique was used throughout Learn to Code.

So if we take our level that I showed you earlier, I’m going to break down the sections at which we were able to run this flattening logic.

I’m going to highlight in red the areas that were flattened together.

So you can see here our water, the grass tiles, and so on.

And each of these red sections equates to just one draw call, instead of an individual draw call per node.

So we were able to greatly reduce the number of draw calls in our scene.

We went from over 550 draw calls to less than 16.

This is huge.

[ Applause ]

Thank you.

Now, you have to use a bit of discretion here because you don’t want to just go all willy-nilly and, you know, flatten all the things.

The reason why is, as Michael showed you earlier, he made the water look really realistic by adding that simple shader modifier that gave us this really great effect.

So if we were to flatten everything, the whole world would kind of drip away.

That’s not what we want.

So we were selective in choosing what things to flatten.

So we flattened the waterfalls together.

In your worlds or games that you create you may choose to have an object that disappears or moves or scales or rotates, or somehow it gets modified.

So I think the takeaway here is based on how you want to use your geometry: if there’s something dynamic about it, only flatten it with other things that move in the exact same way.

So I’m going to give you a couple of tips on working with flattening.

One is you want to store the nodes that you want to flatten under the same parent node.

And it doesn’t have to be the same type of geometry like we’re using grass tiles here, but let’s say you’re modeling a living room and you have a sofa and a chair and a table, and they don’t move relative to each other.

They’re all good candidates to put in one node group and flatten together.

So you want to be really aggressive here and anything that doesn’t move relative to something else, flatten it and you’ll reduce your draw call count significantly.

Now, there is a caveat here.

Learn to Code’s world is pretty static in terms of it’s a small world that’s always visible on the screen, but for your world, you may do something really expansive where you have to go into a building or you’re on a large terrain or you have levels.

So what you don’t want to do here is flatten everything willy-nilly, because what’s going to happen is if you build one giant mesh of this entire world, of which only a portion is visible in the scene at any one time, you’re going to pay the performance cost of rendering all those points and all the meshes, and that’s not what you want.

A simple technique here is to simply subdivide your world into discrete chunks and then run this process on each of those chunks, and as you move from chunk-to-chunk, as the camera moves from chunk-to-chunk, you’re actually getting the benefits of flattening without paying the overhead of rendering everything in your scene at once.
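A rough sketch of that chunking idea: bucket tiles by which chunk their position falls in, then flatten each chunk separately so you can add or remove whole chunks as the camera moves. The chunk size and the grouping key are assumptions:

```swift
import SceneKit

// Subdivide a large world into discrete chunks and flatten each one,
// so only the chunks near the camera need to be in the scene at once.
func flattenedChunks(from tiles: [SCNNode], chunkSize: Float) -> [SCNNode] {
    var buckets: [String: SCNNode] = [:]
    for tile in tiles {
        // Bucket each tile by which chunk its position falls into.
        let cx = Int((Float(tile.position.x) / chunkSize).rounded(.down))
        let cz = Int((Float(tile.position.z) / chunkSize).rounded(.down))
        let key = "\(cx),\(cz)"
        let bucket = buckets[key] ?? SCNNode()
        bucket.addChildNode(tile)
        buckets[key] = bucket
    }
    // Flatten each chunk into its own single mesh.
    return buckets.values.map { $0.flattenedClone() }
}
```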

Next I’d like to talk to you about materials.

Now, materials have an even bigger impact on your draw call count, and I’ll get into why.

Here’s a still frame from that movie I showed earlier, where we show that we flattened this aspect of the scene, the top of the world mostly, into one mesh.

So this is one draw call.

But the astute observers of you out there that may notice, hey, there are multiple materials here, like obviously those stonehenges don’t have the same materials as the staircase, and surely not the stones themselves.

So what’s going on here?

Let’s talk about reducing materials because that’s what we did to get this down to one draw call.

So again, we have our two geometry objects on the screen; one on the left, one on the right.

One uses the sand texture, one uses this more sand/grass looking texture.

If we run the logic that I showed you before, flattening, we actually get that one combined, Godzilla mesh, but we’re still referencing two materials.

So when the CPU has to talk to the GPU, it’s going to say, hey, take this mesh and draw the sand material with it.

Great. Okay.

Now, take this mesh again and draw the parts of it that uses the grass material again.

You still get two draw calls.

There’s an opportunity here to reduce this.

And you may have heard of texture atlasing, but maybe you don’t understand exactly what that is.

It’s a very simple process.

Your artist, and their 3D tools, would take your materials and combine them into one texture atlas and under the covers they’ll update your geometry so that it’s pointing to the right bits inside your textures, but the net result is you have one draw call.

Now, this works if you have a few objects or thousands of objects.

In Learn to Code I’ll give you an example of one of the atlases for the world that I just showed you.

This is what our atlas looked like.

We had over 70 materials that we were able to condense down into one.

That’s a huge savings.

There’s another benefit here that you may not be aware of.

When you have a material in your scene, when your scene loads, there’s a shader that has to be generated behind the scenes and you can’t start your scene until that shader’s actually finished compiling.

If you have 70 materials, you have 70 shaders.

If you have a really rich level with hundreds of objects and hundreds of materials, now all of a sudden your load time balloons and you’re stuck waiting for the scene to load.

By combining it into one material, or fewer materials, you’re reducing the number of shaders that need to get compiled, boosting your startup time, not to mention less I/O in terms of hitting the disk.

So next I want to talk about lighting.

Now, lighting’s very interesting because lighting is what allows you to add rich, vivid detail to your world.

You can take a scene and it looks kind of static, kind of boring.

You add lights, now it gets a little interesting.

So we let our artists put a few lights in the scene and give us this visual effect.

We add a spotlight so that when the characters went around the world, they’re casting shadows as they move along the water or the grass.

It’s a nice effect.

We added omni-directional lights scattered throughout the scene to add a little visual pop.

So you can see the staircases in the foreground are highlighted a little bit more than the staircases in the background.

And lastly, we added an ambient light, which makes sure that everything in the scene is visible; otherwise, you’d have really super dark areas where the other lights don’t hit.

And you can see this sort of great effect if you look in the shadows of where the water is.

It’s really neat.

Now, there’s a performance cost to this.

Lights are not free.

Remember I told you that with draw calls, the more you have, the less performant your application is?

Well, whenever you have a light that hits a mesh, that’s an extra draw call.

So you have five lights in your scene, you just increased your draw call count by five.

There’s a way to get around this, and it’s called a light map.

Now, what this is, is your artist, and their 3D tool, will basically run a process that calculates where the lights are.

They can go to town, put all the lights they want in there, light all the things, and it would actually precompute the intensity of the light hitting your scene and it would store it in a material, not unlike the textures we showed you earlier.
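Applying such a baked light map in SceneKit might look like the sketch below. One common approach, which is an assumption here rather than the shipping setup, is to put the precomputed lighting texture in the material’s multiply slot, so it darkens the diffuse texture exactly where the baked shadows fall; the asset names are placeholders:

```swift
import SceneKit

let material = SCNMaterial()
material.diffuse.contents = "grass-diffuse.png"     // the regular texture
material.multiply.contents = "level1-lightmap.png"  // the baked lighting
material.multiply.mappingChannel = 1                // if the light map uses a second UV set
material.lightingModel = .constant                  // no runtime lights needed

let ground = SCNNode(geometry: SCNPlane(width: 10, height: 10))
ground.geometry?.firstMaterial = material
```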

The beauty of this is that this process is not CPU or GPU intensive.

As a matter of fact, it’s free.

It’s taking me more energy to talk about it than the CPU would actually spend applying this.
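One way to apply a baked lightmap in SceneKit is through the material’s multiply channel; the asset names here are hypothetical, and which UV set the lightmap uses depends on how your artist’s tool exports it:

```swift
import SceneKit

let material = SCNMaterial()
material.diffuse.contents = "terrainAtlas.png"    // base color (assumed asset)
material.multiply.contents = "levelLightmap.png"  // precomputed light intensity
material.multiply.mappingChannel = 1              // lightmaps often use a second UV set
material.lightingModel = .constant                // ignore runtime lights entirely
```

With `.constant` lighting, the baked result is just a texture lookup at render time, so no runtime light is needed for these surfaces at all.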

So we are able to take our light count down to zero if we wanted.

Now, we didn’t.

We left one spotlight in there, because we distinguished between lights that don’t change every frame and those that do. Why render those static lights again? We did it last frame and nothing changed. But the spotlight does change.

As you rotate the world the spotlight’s going to hit the objects in a different way.

When you have your character walking around the world, its shadow is going to cast on the rest of the world in a different way.

So we kept it, but we made sure we were specific about what it applies to: the characters and certain parts of the scene.
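In code, that kind of selective lighting is done with category bit masks: a light only affects nodes whose mask overlaps its own. The mask values below are made-up:

```swift
import SceneKit

let dynamicCategory = 1 << 1  // hypothetical category for dynamically lit nodes

let spot = SCNLight()
spot.type = .spot
spot.castsShadow = true
spot.categoryBitMask = dynamicCategory  // this light only hits matching nodes

let characterNode = SCNNode()
characterNode.categoryBitMask = dynamicCategory  // the character gets the spotlight

let terrainNode = SCNNode()
terrainNode.categoryBitMask = 1 << 2  // terrain is skipped: lit only by its baked lightmap
```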

And the good thing about this is it works in tandem with all the flattening that I showed you earlier, because obviously lights act like a multiplier in terms of your draw calls.

Well, guess what.

We reduced the first term.

We reduced the number of meshes that have to be lit.

So if we actually really wanted to put another light in there, the effect wouldn’t be as bad as when we started.

So after making all of these changes, let’s take a look at our performance and see where we are now.

Let’s zoom in.

Actually, I want to take a moment to just appreciate how good the light maps look.

If you take a look behind the first staircase, you see that little dark shadow in the crevices. Even behind the waterfall it’s slightly darker, around the rocks and the curved areas of the stonehenge. It looks really great.

And you know what, you’re not paying anything for it.

It was all done offline and baked into a material.

Our artist went crazy with lights in their 3D tool.

Now I’ll zoom in on the performance.

Wow, we’re hitting 60 fps now.

Our users won’t be closing our app and going to go do something else.

Let’s look at our rendering time.

We’re at 2.3 milliseconds.

Now, that’s fantastic, for a number of reasons.

Number one, remember we said you have to get under 16 milliseconds to be able to actually render at 60 fps?

Well, with that new iPad that just came out, if you want to take advantage of its 120-hertz refresh rate, you want to get that number under 7.

We’re under 2, so we’re good.
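Those budgets come straight from the refresh rate: strictly, 1000 ms divided by the frame rate (aiming a bit below the 120 Hz figure leaves headroom for everything else). A quick sketch of the arithmetic:

```swift
// Milliseconds available per frame at a given refresh rate.
func frameBudget(fps: Double) -> Double {
    return 1000.0 / fps
}

let budget60 = frameBudget(fps: 60)    // ≈ 16.7 ms to hold 60 fps
let budget120 = frameBudget(fps: 120)  // ≈ 8.3 ms on a 120 Hz display
print(budget60, budget120)
```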

Crank it up.

So let’s talk about headroom.

The beauty of this low number is the headroom it gives you. If you want to add more to your scene, maybe a little more gameplay logic using GameplayKit, some networking, or simply a richer world, you can tell your artist, hey, crank it up, give us more objects and more detail. Or you can just enjoy better battery life.

Finally, let’s look at that draw call count.

Wow, 73. If you remember, the earlier number was over 700.

We’re using less than a tenth of the draw calls we had before.

We’re sipping power instead of guzzling it like we were before.

This is great.

Our app is very fluid.

And when I talk about headroom again, you have to remember you’re going to be doing other things in your app other than just rendering.

In our case, we potentially have the Swift compiler running on the left.

Users are editing code, tapping menu items, and things like that.

And in your application, maybe the user is pinching, making a selection, or doing anything else that takes extra processing time in your app’s logic.

So that’s great.

So to recap: we’ve talked about flattening geometry, which is the low-hanging fruit of performance optimization in terms of reducing your draw call count.

We talked about using a texture atlas, something your artist produces and you simply take advantage of. It reduces the number of materials you use, which has an impact on loading time, making your app start up much faster, and on disk I/O, since there are fewer objects to load.

It also works in tandem with the geometry flattening to make sure you have the fewest draw calls possible.

And lastly, we used light maps to add rich visual detail to the scene without paying any performance penalty for it on the GPU.

Let your artist go crazy.

We have a few related sessions for you.

Thanks, and have a great WWDC.

[ Applause ]
