[ Applause ]
Thank you for coming.
I'm Noah. I'm here with my colleague Warren and we're going to talk to you about building visually-rich user experiences.
So, there's three main areas that we're going to cover today.
The first is a platform overview.
A quick tour of the lay of the land: what the graphics frameworks on our platforms are, what they can do, and how they fit together.
Next, we're going to dive deep into one of the more essential parts of the graphics platform, Core Animation, and cover some best practices you can use to avoid common pitfalls that people run into a lot with CA.
And, finally, we're going to spend a little more time on some tips and tricks.
Some small, fun, useful things you can do to really add some cool visual flair to your apps.
So, if you spend any time digging through the documentation or just reading Stack Overflow or anything, you've probably seen a lot of these concepts come up.
And there are a lot of them, and it would be great to have time to go into every single one of these and basically spend all day here, but we aren't really allowed to do that.
So, instead, we're going to keep it pretty high level.
Let's talk about what we have on iOS, macOS, and tvOS.
First off, as some of you are probably already aware, UIKit and AppKit are the frameworks with which you build the UIs of most of your applications.
They provide you a standard set of system controls.
They're customizable to a degree, so you can make your sliders look way cooler than the usual ones.
And they also come with some really useful built-in behaviors.
You get these standard appearances and transition animations and that kind of thing, but you also get support for accessibility features and dynamic type right out of the box.
So, I said that these let you customize things to some degree, but as soon as you want to go beyond just setting a tint color or an image override on something, you might have to start dropping down into something like Core Animation.
And, as you might already be aware, Core Animation underlies UIKit on iOS and is also pretty heavily integrated into AppKit on the Mac.
And so, a view on iOS has a layer, and the stuff that appears in your application is this hierarchy of layers all appearing together.
And that's what CA is good for.
It takes existing content, puts it together, displays it in this hierarchy that can move around and animate and so on, and finally composites it all together to get it onscreen.
So, beyond that, there are a couple other frameworks that we have for some kind of more specialized purposes.
Core Graphics is one that you also might have encountered.
It lets you do really elaborate, customizable 2D drawing.
So, you can take a path and some colors and turn them into these nice message bubbles here.
And Core Graphics is what you use when you need to build images at runtime.
For building something beforehand, you'd probably use something like Photoshop or Sketch or Illustrator, but Core Graphics lets you do that on the device, as your code is running.
Which is really useful.
So, Core Graphics is useful for drawing things, but there's another framework that we have for working with images that you already have and that is appropriately named Core Image.
And what that does is apply filters to things.
You provide some set of input images, hand them to Core Image along with a filter that does some thing, and it produces the final rendered output of the filter.
So, you have obviously simple color transformation effects, like this nice sepia, well, that's not quite sepia, I don't know what to call that.
Don't worry about it.
But you can also do kind of interesting stylization effects, like this cartoon drawing kind of thing.
And what's neat about Core Image is that it's really heavily optimized and we kind of do a lot of the work for you.
So, when you're doing something more complicated like chaining multiple filters together, we can optimize that for you: instead of going through this process over and over again on your images for each filter, we can pipeline it.
And what's even better is that all of this stuff can run on basically whatever parallelization hardware your device has.
If you don't have many GPU resources available, it runs on the CPU and makes use of Grand Central Dispatch and all that.
Completely under the hood, transparent to you.
If it has Metal available, then it can run using that.
So, that's pretty cool.
There are a couple other kinds of contents you might want to get into your apps that we have some really useful frameworks for.
And for 3D content, we have this thing called SceneKit, which basically lets you get your 3D content into your app and has some really nicely integrated tools in Xcode that let you manipulate it, set up scenes, and so on.
And you'll see a little bit of what that looks like a bit later on.
SpriteKit is another framework that's kind of built heavily for games.
It gives you a lot of animation primitives and that kind of thing for having things, you know, move around and interact with each other onscreen.
So, there's one more framework that I'd like to talk about before we move on and that is one you've probably heard of called Metal.
And Metal is the fundamental graphics API on our platforms.
It's what all of these other things are built on top of.
I mentioned that Core Image can use Metal, but SceneKit draws using Metal on devices that support it, SpriteKit does the same, and Core Animation itself is using Metal as well.
And with Metal, it's kind of hard to describe exactly what you use it for, because you can do basically anything with it if you're willing to put the work in.
But what it lets you do is just kind of get the fastest possible access to the lowest level of the graphics hardware we have available.
And so, a lot of the time you're not using Metal directly, you're using it through one of these other frameworks, but if you need to build something custom, something that isn't really supported by any of these, that's what you use it for.
As an example, on the iPhone 7 Plus we have this neat depth-of-field effect in the camera, which takes the camera image, does some depth reconstruction on it, some math that is way beyond me, figures out the depth of the pixels at each point in the scene, and then applies an intelligent blur filter to them so you get this nice soft background.
And that doesn't sound like any of the other frameworks that I told you about, right?
So, that was built on top of Metal for iOS 10 and rebuilt for iOS 11.
I don't know if you've noticed, but the effect just got better in the betas.
So, that was kind of a quick tour.
We just went down through all these layers, and the idea here is not to give you an exhaustive explanation of what each one does; I just want you to know where things are.
So, if you have some problem that you want to solve in your app, you can look back and think, I think I heard that there's a framework that does that.
Let me check that out.
So, to give you a little bit more of an idea of how you use these and how they work, I'd like to bring up my colleague, Warren, for a demo.
[ Applause ]
So, with this series of demos, I'm going to walk you through some of the frameworks that Noah just introduced and we'll start right at the beginning with UIKit customization.
I want to say right from the start that it's not important for you to pay attention to every single line that I highlight or talk about.
The sample is already available online and you can definitely pull it down and follow along on your own time.
But, over the course of the next several minutes, I'll walk through some of the platform features that we have and how you can use them to enhance the user experience of your application.
So, to start, I'll just go ahead and show you what this particular UIKit customization example does.
I have here a screen, perhaps it's from a video-editing application that allows me to control the white balance of a photograph.
As you can see, I've already customized the UI sliders at the bottom of this screen with track tint colors to signify the purpose of each slider.
So, the slider on top will control the color temperature while the slider on bottom will control the color tint.
And the combined effect of these sliders will allow us to control the white point of the photograph.
But, I might actually want to punch this up a little bit more by adding a track image.
I'll go ahead and do that now and show you the effect.
So, as you can see, I've added a gradient image to each of these sliders with rounded end caps.
And this actually creates a much nicer visual appearance and also hints a little bit further at what these sliders actually are for.
Now, if I adjust these sliders right now, you'll notice that the image itself doesn't change.
But, in the next sample, I'll show you how to actually implement the effect of modifying the image.
In the meantime, I'll walk through this particular example in a little bit more detail.
So, I instantiate a UIImage and then make it resizable by using one of the APIs on UIImage.
And then I can actually set each of the images on the sliders.
The minimum track image and then maximum track image.
And, as I move the slider, it will be redrawn and use the appropriate image to draw the portion of the track to which it applies.
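In the spirit of the sample (which is available online), the track customization just described might be sketched like this; the asset name "TemperatureTrack" and the cap inset values are illustrative assumptions, not the sample's actual values:

```swift
import UIKit

// Sketch: customize a UISlider's track with a resizable gradient image.
// "TemperatureTrack" is a hypothetical asset-catalog image name.
func customizeTrack(of slider: UISlider) {
    guard let trackImage = UIImage(named: "TemperatureTrack") else { return }
    // Stretch the middle of the image while preserving the rounded end caps.
    let capInsets = UIEdgeInsets(top: 0, left: 6, bottom: 0, right: 6)
    let resizable = trackImage.resizableImage(withCapInsets: capInsets)
    // The slider redraws each portion of the track with the matching image.
    slider.setMinimumTrackImage(resizable, for: .normal)
    slider.setMaximumTrackImage(resizable, for: .normal)
}
```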
So, I mentioned that I'd show you how to actually implement this effect and that's what I'd like to do now.
So and I'll do that with Core Image.
So, as you can see, this is the exact same UI, but now when I adjust the sliders, which I might do because this image was shot underwater and I want to make it a little bit warmer and remove some of that green tint from the algae.
You can see that the image actually updates in real time.
Let's talk about how we actually do that.
So, Noah mentioned that Core Image has numerous different filters available, and the filter that we'll be using, appropriately enough, is the CITemperatureAndTint filter.
Now, it bears a mention that CI filters don't actually do the hard work of image filtering in Core Image.
Rather, what does most of the work is an object called a CIContext.
The CIContext is basically the interface to the underlying hardware that actually performs the image filtering.
So, I've already created one of those and I'll be using it in this function, but first what I want to do is instantiate a CIImage based on the image that's currently being displayed by the UIImageView on my screen.
Now, the CITemperatureAndTint filter has a couple of different parameters that allow me to control the white point; in particular, these parameters are called the neutral and target neutral vectors.
And each of these represents a particular color temperature and a color tint.
And so, I know that the temperature and tint that I'm shooting for are 6500 kelvin and a perfectly neutral 0, neither too magenta nor too green.
So, I construct a vector with those parameters and then I construct another vector with the parameters that come from the values of my slider.
I then construct a dictionary that holds all of these parameters, including the input image, and ask Core Image to create a filter for me based on the name CITemperatureAndTint.
Then, in order to actually perform the filtering, I'm going to use a little bit of Grand Central Dispatch to get off the main thread because image processing is a fairly intensive operation.
So, I'll dispatch to a queue that I created previously and then ask my context to create a CGImage based on the output image of the filter that I just created, which has the effect of pulling the image through the filter, applying the effect.
Once that's done, I'll then dispatch asynchronously back to the main queue and set the resulting image on my UI image view once more.
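A minimal sketch of the pipeline just described might look like this. The class name, queue label, and completion-handler shape are my assumptions, and which vector carries the slider values versus the fixed 6500 K reference follows my reading of the description above:

```swift
import UIKit
import CoreImage

final class WhiteBalanceFilter {
    // The CIContext is the interface to the hardware that performs the
    // filtering; contexts are expensive, so create one and reuse it.
    private let context = CIContext()
    // Hypothetical background queue for the processing work.
    private let queue = DispatchQueue(label: "com.example.white-balance")

    func apply(temperature: CGFloat, tint: CGFloat,
               to source: UIImage,
               completion: @escaping (UIImage) -> Void) {
        guard let input = CIImage(image: source) else { return }
        let parameters: [String: Any] = [
            kCIInputImageKey: input,
            // Each vector packs a color temperature (kelvin) and a tint.
            "inputNeutral": CIVector(x: temperature, y: tint),
            "inputTargetNeutral": CIVector(x: 6500, y: 0),
        ]
        guard let filter = CIFilter(name: "CITemperatureAndTint",
                                    parameters: parameters),
              let output = filter.outputImage else { return }
        queue.async {
            // Creating the CGImage pulls the image through the filter.
            guard let cgImage = self.context.createCGImage(output,
                                                           from: output.extent)
            else { return }
            DispatchQueue.main.async { completion(UIImage(cgImage: cgImage)) }
        }
    }
}
```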
Now, sometimes using UIKit or Core Image or many of these other high-level frameworks is not quite enough and you actually want to draw user interface elements on the fly.
Well, like Noah mentioned, you can actually use Core Graphics to do that.
This is a very simple application of Core Graphics, but let's say I want to draw a star polygon.
Now, in order to do that, I'll actually use an API that was introduced in iOS 10 called UIGraphicsImageRenderer.
This gives me much easier access to a Core Graphics context that I can issue drawing commands into than we had in the past.
So, in order to use it, I'll basically determine where I want to draw the star.
In this case, in the center of my view.
Create a UI graphics image renderer at a particular size, and then call the image method, which takes a closure in which I'll actually issue the drawing commands.
So, it passes me this renderer context, and from that I can retrieve a CGContext, which has a huge bevy of methods that allow me to do things like draw paths, draw text, draw images, and fill and stroke paths.
Now, the math here is not all important.
What is most important is the API.
So, on that context, I'll set the fill color, which in this case is yellow because stars are yellow, and then I'll go ahead and position the current point of the context at the top of the star with the move(to:) method.
And then in a loop I'll go ahead and add lines between the inner and outer points of my polygon and once I'm done I can actually fill that path, which has the effect of filling in the star outline and creating the image that you saw.
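The drawing loop just described might be sketched as follows. The image size, point count, and radius ratio are illustrative; the actual sample's math may differ (and, as noted above, the math isn't the important part):

```swift
import UIKit

// Sketch: render a five-pointed star into a UIImage using
// UIGraphicsImageRenderer (iOS 10+). Sizes here are illustrative.
func makeStarImage(size: CGSize = CGSize(width: 100, height: 100),
                   points: Int = 5) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { rendererContext in
        let cg = rendererContext.cgContext
        cg.setFillColor(UIColor.yellow.cgColor)
        let center = CGPoint(x: size.width / 2, y: size.height / 2)
        let outerRadius = min(size.width, size.height) / 2
        let innerRadius = outerRadius * 0.4
        // Alternate between outer and inner vertices around a circle,
        // starting at the top of the star.
        for i in 0..<(points * 2) {
            let angle = CGFloat(i) / CGFloat(points) * .pi - .pi / 2
            let radius = (i % 2 == 0) ? outerRadius : innerRadius
            let vertex = CGPoint(x: center.x + radius * cos(angle),
                                 y: center.y + radius * sin(angle))
            if i == 0 {
                cg.move(to: vertex)
            } else {
                cg.addLine(to: vertex)
            }
        }
        // Filling closes the subpath and fills in the star outline.
        cg.fillPath()
    }
}
```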
But, of course, nobody wants to actually just have a gigantic star sitting in the middle of their application, so let's go one step further and talk about how you can actually use this as a user interface element in your own app.
We'll do that with Core Animation.
Core Animation happens to be really good at presenting bitmap content.
So, we can actually use the image that we just generated as the contents of a CALayer.
Just to show you the effect that we're after, I have this button here, and UIButtons are, you know, very common and their usage varies depending on the context, but if I want to add a little bit of pizazz to this particular UIButton, then I can go for an effect like this.
Let me show you that one more time.
Kind of cute, right?
Now, this is a little bit ostentatious and you probably wouldn't want to do exactly this in your app, but this is strictly for illustrative purposes.
Let's talk about how to achieve that effect.
So, I'll configure some parameters for the stars that I want to display and then I'll add an action to my button to actually call this play star animation method.
In there, I'll create a loop that actually loops over the number of stars that I want to create and for each of them I'll create a CALayer.
CALayer is the workhorse of Core Animation and it's the thing that actually has a position on the screen and can be animated, moved around, and have its contents set.
So, I'll actually set its contents to a CG image that represents one of the stars that I just rendered using the class that I introduced previously.
I'll set the initial position of each star in a circle around the button and also set its bounds to some initial size.
In order to actually get it to show up on the screen, I need to add it as a sublayer of the button's layer.
Now, you'll notice that of course the stars are animated from their initial positions to positions that are further out along a ray from the center of the button.
So, I can do a little bit more math to calculate the final position of the star, then I can construct a transform that will simultaneously rotate, scale, and translate the layer to its final position.
I'll use CA basic animation to interpolate between the initial position of the CALayer and its destination.
I could use a more sophisticated animation like a key frame animation if I wanted to, for example, make it move along a path.
I'll set the from value to the current transformation of the layer and the to value to the transformation that I just constructed.
And I'll also use a custom duration because the default duration of 0.25 seconds is a little bit little bit fast for the effect that I'm after.
And I'll apply an ease out function so that as the star approaches its target position it will slow down.
And just to be a good citizen, so I don't have a bunch of layers lying around, after several seconds I'll go ahead and remove that layer from its superlayer when the animation completes.
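Pulling those steps together, a single star's flight might be sketched like this. The radii, durations, and angles are placeholders, and the cleanup-by-delay approach is one simple option (a CATransaction completion block would work too):

```swift
import UIKit

// Sketch: fly one star layer outward from a button. `starImage` is assumed
// to have been rendered beforehand (e.g. with UIGraphicsImageRenderer).
func emitStar(from button: UIButton, starImage: CGImage, angle: CGFloat) {
    let star = CALayer()
    star.contents = starImage
    star.bounds = CGRect(x: 0, y: 0, width: 20, height: 20)
    star.position = CGPoint(x: button.bounds.midX, y: button.bounds.midY)
    // The layer only shows up once it's in the button's layer tree.
    button.layer.addSublayer(star)

    // Simultaneously translate outward, rotate, and scale the layer.
    var transform = CATransform3DMakeTranslation(80 * cos(angle),
                                                 80 * sin(angle), 0)
    transform = CATransform3DRotate(transform, .pi, 0, 0, 1)
    transform = CATransform3DScale(transform, 2, 2, 1)

    let animation = CABasicAnimation(keyPath: "transform")
    animation.fromValue = NSValue(caTransform3D: star.transform)
    animation.toValue = NSValue(caTransform3D: transform)
    animation.duration = 1.0  // the 0.25 s default is too fast for this effect
    // Ease out so the star slows as it approaches its target.
    animation.timingFunction = CAMediaTimingFunction(name: .easeOut)
    star.add(animation, forKey: "fly")
    star.transform = transform  // keep the model layer consistent

    // Clean up once the animation is long over.
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        star.removeFromSuperlayer()
    }
}
```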
Well, that's all well and good, but sometimes you actually want to have even more elements flying around the screen and for that you might want to use one of these high-level graphics frameworks that are intended for games, but there's no reason you can't use them in your own apps.
So, let's take a quick look at SpriteKit.
Well, here's a particle effect that I created in SpriteKit, and this could easily be overlaid on your own app's content.
So, let's take a look at how you can incorporate SpriteKit very easily into your own app.
That's literally the entire code.
And what makes it easy for us to do this so tersely is that we're relying on this SKS file that I created previously.
And Xcode has a really nice visual editor for particle systems.
Now, you're not seeing the texture here because I created it programmatically, but you'll notice that if I open up this inspector in Xcode, I can actually control many different parameters of this particular particle system, including things like its opacity over life, initial velocity, and the distribution parameters for its velocity and so on.
So, that allows me to verify visually the effect that I'm after with this particle system, and what's great is that once I'm done with that, I don't have to write any code to achieve the effect.
It's already persisted in the SKS file.
Let's take one step back and talk about how to actually integrate SpriteKit into your application.
So, in order for anything to show up whatsoever, I need to create an SKScene.
And the SKScene is really just the stage on which all of your objects live.
And an SKScene is presented in your application within an SKView.
So, I've created an SKScene that's the same size as the view that it's going to be presented in and I present the scene on the view.
I then create an SKEmitterNode that actually loads the particle system parameters and then will actually be the thing that drives the particle animation as it's running.
I'll go ahead and create one more of those famous star images and set it as the texture on my emitter so that each particle has that star shape.
And I'll position it in the middle of the screen.
I'll then go ahead and add the emitter as a child of that scene and that's really all there is to it.
I don't have to start animating anything.
SpriteKit just takes over from there and runs the animation.
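The whole setup just described can be sketched in a handful of lines. The file name "Stars.sks" is a stand-in for whatever the particle file is actually called:

```swift
import SpriteKit

// Sketch: present a particle system, authored in Xcode's particle editor,
// inside an SKView. "Stars.sks" is a hypothetical file name.
func showParticles(in skView: SKView, starImage: UIImage) {
    // The scene is the stage on which all SpriteKit objects live.
    let scene = SKScene(size: skView.bounds.size)
    skView.presentScene(scene)

    // The emitter loads the persisted parameters and drives the animation.
    guard let emitter = SKEmitterNode(fileNamed: "Stars.sks") else { return }
    emitter.particleTexture = SKTexture(image: starImage)
    emitter.position = CGPoint(x: scene.size.width / 2,
                               y: scene.size.height / 2)
    scene.addChild(emitter)
    // No further code needed: SpriteKit runs the animation from here.
}
```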
So, 2D is great, but you know what's really cool? 3D.
So, as our final sample, let's talk about the basics of SceneKit.
Well, here's a similar demo, except that you can see the stars are flying all over the place, bouncing off of each other, and most importantly they're in 3D and they're lit realistically.
So, let's walk through the basics of SceneKit.
Well, there's an analogous class in SceneKit to SpriteKit's SKScene, which is SCNScene, and, exactly as before, this is the thing that contains all of the various objects in your scene.
I'll go ahead and set its background to a nice starfield texture, but in order to actually see anything, we'll also create a node that represents our vantage point, with an SCNCamera.
I can set numerous properties on this camera, but all I really care about right now is its position.
I'll go ahead and position it a few units back from the origin so that it's pointing to the star particle system.
Then I go ahead and add it as a child node to the scene's root node.
Now, I don't actually have any content to display on the screen yet, so I can go ahead and use this framework called Model I/O and Model I/O has a lot of different facilities for working with 3D data.
In this case, I'm just going to go ahead and load a 3D model that I created previously in the form of an MDLAsset.
There's a lot of interoperation between SceneKit and Model I/O, so I can extract an SCNGeometry, which is SceneKit's notion of a 3D mesh, from the MDL mesh that I just loaded.
Now, that particular model didn't have any material information associated with it, so I can programmatically create an SCNMaterial object and set the diffuse color to yellow as before in the previous demos.
Finally, in order to actually light my objects, I'll create an SCNLight and associate it with a node and provide some properties there, including the position and the type of light.
SceneKit has a lot of different types of lights, including directional, point, and spot, and you can select the one that's appropriate for your use.
And I'll go ahead and kick off the actual animation by calling this startStarEmitter method.
Well, this just creates a timer that runs numerous times per second and creates an SCNNode with an instance of the geometry that I just loaded.
I'll go ahead and add each of these to the scene's root node as I create them, and also associate with each one a physics body so that it gets physically plausible animation pretty much for free.
I'll also go ahead and create a random upward velocity just so that there's a little bit of variety.
And, finally, I'll remove each star from its parent node after some number of seconds, just to clean up memory.
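The SceneKit setup described over the last few paragraphs might be sketched like this; the asset names ("Starfield", "star.obj"), positions, and the omni light type are illustrative choices, not necessarily the demo's:

```swift
import SceneKit
import ModelIO
import SceneKit.ModelIO  // bridges Model I/O meshes into SceneKit

// Sketch of a SceneKit scene with a camera, a Model I/O-loaded mesh,
// a programmatic material, a light, and a physics body.
func makeStarScene() -> SCNScene {
    let scene = SCNScene()
    scene.background.contents = UIImage(named: "Starfield")  // assumed asset

    // A camera node gives us a vantage point a few units back from the origin.
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(0, 0, 10)
    scene.rootNode.addChildNode(cameraNode)

    // Load a 3D model with Model I/O and extract a SceneKit geometry from it.
    if let url = Bundle.main.url(forResource: "star", withExtension: "obj") {
        let asset = MDLAsset(url: url)
        if let mesh = asset.object(at: 0) as? MDLMesh {
            let geometry = SCNGeometry(mdlMesh: mesh)
            // The model has no material info, so create one programmatically.
            let material = SCNMaterial()
            material.diffuse.contents = UIColor.yellow
            geometry.materials = [material]
            let starNode = SCNNode(geometry: geometry)
            // A dynamic physics body gives plausible motion for free.
            starNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
            scene.rootNode.addChildNode(starNode)
        }
    }

    // Light the scene; SceneKit also offers directional and spot lights.
    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light?.type = .omni
    lightNode.position = SCNVector3(0, 5, 5)
    scene.rootNode.addChildNode(lightNode)
    return scene
}
```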
So, the purpose of these demos has been to walk you through the basic capabilities of the graphical frameworks that are on our system and now I'd like to bring back Noah to talk in-depth about Core Animation.
[ Applause ]
So, how many of you have used Core Animation for something?
Good. Glad to hear it.
How many of you have run into something weird or unexpected while using it?
Yeah. So, there are a couple of key concepts in CA which are sometimes not entirely intuitive at first, and it's really easy to get tripped up on them.
So, I want to take you through some of those and explain both what's going on and why it's happening.
I could just give you the answer.
In a lot of these cases, the answer is going to be something really simple, but I think it's important to know why something is happening before you start dealing with it.
So, let's start out by talking about animations, which are pretty core to the... wow, I did not intend to say that.
So, there's a behavior that you'll see pretty often when you're first starting to play around with CA animations.
You'll create an animation, you'll add it to your layer, and you'll see the layer animate say from point A to point B and then it'll jump back and you go that's weird.
And the reason for this has to do with what's called the model and the presentation layers.
There are these two things that exist at once, and knowing the distinction between them is crucial to understanding why that's happening.
So, the layer that you're mostly interacting with, the one that you're setting properties on and adding animations to and so on, is the model layer.
The one that's actually onscreen is the presentation layer.
And the presentation layer is sort of a clone of the model layer that's calculated each frame from the state of the model layer and any animations that are on the model layer at that time.
So, what's happening in this previous example is that you have the animation added at this time, and you can see that the presentation layer follows the animation, but once it's over, the animation's gone, and so it goes back to what the model layer was showing.
And so, when you look on the internet, there will be a million people telling you to do exactly the wrong thing here.
What they will say is set the animation to not be removed on completion.
Set its fill mode to forwards and forget about it.
And the problem with this comes from the inconsistency it introduces, because after your animation is over, the model layer still has its starting value.
So, this looks right, but if you look at the value of the property on these two graphs, the presentation layer is showing something different than the model layer.
And where that becomes a problem is when for instance you're trying to align something to a layer that you've previously animated and you set it to the layer's position and it's the position that it had like two minutes ago, which is weird.
Pretty unintuitive, right?
So, the solution to this is fortunately dead simple.
You can mostly forget about the entire conceptual stuff that I just explained there.
All you have to do is, when you're adding your animations to a layer, set the model layer's property at the same time.
So, in that case, at the beginning of the animation, the model layer jumps to its final state and the presentation layer animates smoothly and afterwards they're consistent.
And, better yet, there isn't still an animation sitting around on your model layer, confusing the state of affairs and kind of setting you up to be tripped up later.
So, what that looks like in code is just this.
You have your animation.
You set your duration, your from value, your to value, and then at the same time as you add it to the layer, you set the value of the actual property on it.
It's that simple.
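A sketch of what that slide code might look like, using a position animation as the example property (the duration and key are placeholders):

```swift
import QuartzCore

// Sketch of the pattern: animate a layer's position and update the model
// value at the same time, so model and presentation stay consistent.
func move(layer: CALayer, to destination: CGPoint) {
    let animation = CABasicAnimation(keyPath: "position")
    animation.duration = 0.5
    animation.fromValue = layer.position
    animation.toValue = destination
    layer.add(animation, forKey: "position")
    // The model layer jumps to the final state immediately; the
    // presentation layer animates smoothly, and afterwards they agree.
    layer.position = destination
}
```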
Be aware of the model presentation layer dichotomy and apply the final state to your layers when you're adding animations to them.
So, next, I'd like to talk about something that I have run into more times than I would like to admit, which is the intersection between transforms and frames.
So, you're pretty familiar with the idea of the frame.
You set this rectangle that defines where your layer is and how big it is.
And so, you have your layer set up and you decide you want to scale it down at a transform that scales it to 25 percent.
Then later on, somewhere else in your code, you set the frame back and your layer goes back to the size that it's supposed to be.
And all seems well and good, right?
Well, the problem arises when you set another transform on the layer.
In other words, you remove the transform that's already there and set something new.
So, say you want to rotate this layer, you know, keeping it otherwise the same size and what not.
So, you set up a CGAffineTransform and give it a rotation angle, and your layer does that.
Which is not right.
So, why is this happening?
Well, the thing to know is that the frame is the layer's size from outside, the bounds are the layer's size from inside, and if you ask the layer where its frame is, it's calculated from the bounds and the transform.
So, in other words, when your layer had come back to its original size, it was actually four times the size because it still had that scale transform on it.
So, you told it, onscreen I want you to be this size, and it was scaled down by four times, and so it went, OK, I need to be four times bigger than that.
And that seemed fine because it looked the same size onscreen, but then when you unset that transform, it went back to its magnificent original size there.
So, the long and short of this is if you're using transforms, be really careful about using the frame property.
It's generally better to just not set the frame.
Instead, figure out the size and position that you want the layer to be at: in this case, the width and height of the frame that you wanted, with the position being the center of that frame.
So, it's pretty much just that.
The frame is affected by the transform and will interact with it in weird ways.
If you combine them, you're probably going to have a bad time, so just set bounds and position.
You will save yourself a lot of headache.
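In code, the safe pattern might be sketched like this (the sizes and the scale factor are the illustrative values from the example above):

```swift
import QuartzCore

// A layer with a scale transform applied.
let layer = CALayer()
layer.transform = CATransform3DMakeScale(0.25, 0.25, 1)

// Risky: frame is derived from bounds AND transform, so setting it here
// would make the underlying bounds four times the intended size.
// layer.frame = CGRect(x: 0, y: 0, width: 100, height: 100)

// Safe: bounds and position are independent of the transform.
layer.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)
layer.position = CGPoint(x: 50, y: 50)
```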
So, for the next couple of these, I'd like to look at them in the context of an app.
We have this shiny little app here that lets you take emoji and put them all over your pictures.
It's very original.
I thought of it over a long period of time.
And there's two things here that are kind of contributing to the look of this.
The first is the mask that's sort of cutting out the edges of this layer.
And the second is the shadow underneath it.
First, let's talk about the mask.
So, what's going on here is that we have this kind of custom rounded shape that's clipping the edges of the emoji.
And the way this works is you give CA another layer with whatever shape you want to affect the opacity of your main layer, and there we go.
CA will take the opacity of that layer and use it to cut out the shape of your layer.
And you can do some kind of cool interesting things with this.
In a lot of cases, people will just use a simple shape, but you can fade them out, you can cut holes in them; it lets you do all kinds of interesting things with your layers.
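A minimal sketch of that masking setup, using a CAShapeLayer for the custom rounded shape (the corner radius is an illustrative value):

```swift
import UIKit

// Sketch: clip a layer with a custom rounded shape by assigning a mask.
// CA uses the mask layer's opacity to cut out the host layer's shape.
func applyRoundedMask(to layer: CALayer) {
    let mask = CAShapeLayer()
    mask.path = UIBezierPath(roundedRect: layer.bounds,
                             cornerRadius: 24).cgPath
    layer.mask = mask
}
```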
And so, if you want to achieve an effect like this, where the emoji are fading out at the bottom as well as being clipped by the shape, you might think, well, OK, I can use a mask to set the opacity of things.
I can just use two masks here, right?
Well, you can, but that's not going to go very well and the reason is performance.
Each mask has a cost.
So, when you have multiple masks in a layer, CA has to combine each one in sequence to produce the final image that ends up on screen.
So, you have your emoji layer here and the fade-out mask; those get combined into the fading-out emoji; then you have your other layer containing the emoji, which is getting masked by this other shape. So, this works, but it's suboptimal.
You can see how much time could be spent here just drawing pixels and figuring out where things are opaque and so on.
The problem this causes is that CA has to draw this stuff offscreen.
Basically, in your normal drawing path, CA can just take the contents of one thing and put it into the final composited image.
And take the contents of another and put it in, and so on and so forth.
But, in this case, it has to create a separate context to draw all this stuff in and then put it onto the screen.
And the more of these you have on there, the more that effect stacks up.
So, if you want to still achieve this effect, the best thing to do is to cheat.
Basically, fake at least one of the masks that you're using.
In this case, you could get away with just putting a gradient overlay on top of this layer and then including that in the overall masked thing.
Obviously, there are some cases in which this won't work.
If your background is not a solid color, then you can't use a solid-color gradient, but you can do other things, such as drawing your background into another image, fading that out, and using that as an overlay.
And it's faster to do that once than to make CA do it every frame.
So, long and short of this is each mask has a cost.
They are fun and useful, but kind of expensive.
The size of your layers matters significantly when you're masking things.
If you have a really large layer and you have tiny corner masks on it, CA is still having to draw that entire thing, which is not ideal.
So, wherever you can, use shortcuts.
That's about it.
Next up, let's talk about shadows.
So, you're probably familiar with the CA shadow properties, shadow opacity, offset, radius, so on and so forth.
And if you use them, you know, your shadow shows up, everything seems to be fine, but if you're in an app that has a lot of views that have shadows on them, you might notice that your performance starts taking a hit.
And the reason for that is that CA has to draw the shadow for each of these individually and it has to draw it including the entire hierarchy of layers that each view contains.
So, what I mean by that is when you have that layer being shadowed, CA has to first draw its background, then draw every single sublayer, and then figure out the opacity of all of that.
And the result of that is just this.
So, like, that seems like wasted time, right?
We just spent all this time drawing these lovely emoji, but we're only using their opacity.
So, basically when the contents of the sublayers don't matter, using the regular shadow system is kind of a waste of time.
What you want to use instead is the shadow path property.
And we mentioned earlier that you can use Core Graphics to draw paths that have, you know, varying shapes and then do things with them.
CG paths are also used elsewhere in the system for doing path stuff. In this case, you just basically make a path that is the shape of your layer and hand that to CA, and it can draw that and blur it, producing your shadow much faster than it can by drawing the layer and all of its sublayers.
So, that's the main performance advantage of these.
The other is that if you have a lot of layers using the same shadow path instance, then CA doesn't have to redraw the shadow at all.
It can just reuse them everywhere and that's a huge performance boost.
So, when you're working with layer shadows, be aware that the blending of the layer and all its sublayers is really expensive.
You want to avoid it where you can, so use shadow paths.
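A minimal sketch of what that looks like in practice; the layer name and geometry here are made up for illustration:

```swift
import UIKit

let cardLayer = CALayer()
cardLayer.frame = CGRect(x: 20, y: 20, width: 300, height: 200)
cardLayer.shadowOpacity = 0.4
cardLayer.shadowRadius = 8
cardLayer.shadowOffset = CGSize(width: 0, height: 4)

// shadowPath tells CA the shadow's shape up front, so it never has to
// render the whole sublayer tree just to find the silhouette.
cardLayer.shadowPath = UIBezierPath(
    roundedRect: cardLayer.bounds,
    cornerRadius: 12
).cgPath
```

And if several layers share the same size, handing them the same `CGPath` instance lets CA reuse the rendered shadow instead of blurring it once per layer.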
Pretty simple, right?
So, we just took you through kind of a bunch of CA best practices.
Some ways to use the stuff that you're already using more effectively.
Let's look at some tips and tricks.
Let's look at some things that you might not be using already.
So, CA has a CALayer subclass called CAShapeLayer. What it does is take a Core Graphics path, like the ones I was mentioning earlier; you give it a stroke color and a fill color and it draws your shape.
You may have encountered this before, but one of the cool things about CA is that a ton of the properties that you have on layers are animatable.
You can set up an animation with pretty much any of these properties and CA will do the right thing, interpolate it over time, and give you an animated layer.
And there are some cool things you can do with that.
Obviously, you could, you know, put an animation on the stroke color or fill color, but that seems pretty obvious, right?
What else can we do?
Well, CAShapeLayer has a couple of properties called strokeStart and strokeEnd, which let you create an effect where the layer draws itself along the length of the path or undraws itself or draws itself again.
And that's kind of fun.
You can use that for various effects in your UI where, you know, a logo might draw itself into being and then fill in or something like that.
You can also do something kind of fun with the lineDashPattern property and, specifically, animating the lineDashPhase.
If you have a layer that's set to draw its stroke with a dash pattern, then you can just animate the lineDashPhase from, you know, some pixel value to some other pixel value, and the dashes will shift along the edges of the shape, so you get this kind of cool marching-ants sort of effect.
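Both of those effects might be sketched like this, assuming a `shapeLayer: CAShapeLayer` that already has a path and a stroke color set:

```swift
import UIKit

let shapeLayer = CAShapeLayer()
shapeLayer.path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 100, height: 100)).cgPath
shapeLayer.fillColor = nil
shapeLayer.strokeColor = UIColor.black.cgColor
shapeLayer.lineDashPattern = [8, 4]  // 8pt dash, 4pt gap

// Draw the shape in along the length of its path.
let draw = CABasicAnimation(keyPath: "strokeEnd")
draw.fromValue = 0
draw.toValue = 1
draw.duration = 1.5
shapeLayer.add(draw, forKey: "drawIn")

// Marching ants: slide the phase by one full dash period (8 + 4 = 12)
// and repeat, so the loop is seamless.
let march = CABasicAnimation(keyPath: "lineDashPhase")
march.fromValue = 0
march.toValue = 12
march.duration = 0.5
march.repeatCount = .infinity
shapeLayer.add(march, forKey: "march")
```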
So, that's shape layers.
Let's talk about gradient layers.
Again, you might have used these before, but I would guess you might not have tried animating one.
And, again, all these properties are animatable, so you can just kind of mess with them and see what you get.
You might for instance have a layer that's a gradient layer that's masked with this star shape.
And if you animate the start point and end point from somewhere off to one side of the thing to somewhere off on the other, you get this kind of shiny, like, metal effect.
Kind of neat, right?
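A sketch of that shine effect, with the star mask and the geometry assumed for illustration:

```swift
import UIKit

// A gradient layer masked to a shape; `starMask` is a stand-in for
// whatever CAShapeLayer holds your star path.
let starMask = CAShapeLayer()
let shine = CAGradientLayer()
shine.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
shine.colors = [UIColor.gray.cgColor, UIColor.white.cgColor, UIColor.gray.cgColor]
shine.mask = starMask

// Sweep both endpoints from off one side to off the other.
// Unit coordinates, so values outside 0...1 start the band offscreen.
let start = CABasicAnimation(keyPath: "startPoint")
start.fromValue = CGPoint(x: -1, y: 0.5)
start.toValue = CGPoint(x: 1, y: 0.5)

let end = CABasicAnimation(keyPath: "endPoint")
end.fromValue = CGPoint(x: 0, y: 0.5)
end.toValue = CGPoint(x: 2, y: 0.5)

for anim in [start, end] {
    anim.duration = 1.0
    anim.repeatCount = .infinity
    shine.add(anim, forKey: anim.keyPath)
}
```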
We've actually used that elsewhere in the system UI, well, sorry, the watch UI.
If you've used Apple Pay, the little card swooping-in animation, that was a gradient layer.
So, yeah. The last thing that I want to talk about is layer speed.
And this one's a kind of fun trick.
It can be a little bit hard to use, but you can get some really cool effects out of it.
So, there's no kind of idea of a global time in CA.
There's only kind of relative time, from one layer to its sublayers, and their sublayers below that.
And so, a layer can have a different time scale where like if one second passes in its super layer, two seconds pass in the sublayer.
And so, the result of that is that if an animation is added to a layer with a speed of two, it happens twice as fast.
If it's on a layer with a speed of 0.5, it happens half as fast.
And what this lets you do, obviously, is run your animations more or less slowly, but that seems kind of superfluous.
You could just set them to be that duration in the first place, right?
But the cool thing happens when you set the layer speed to zero.
And what happens there is that the animations don't move.
The animations pause.
And that, again, might not seem very useful for a thing called Core Animation, but the thing is that it gives you control over the animation's progress through time with a property called timeOffset.
So, what that lets you do is let the user or any other process in your app scrub through an animation interactively.
You can have an arbitrarily complex animation or series of subanimations, and setting the timeOffset on one layer will have all of them progress according to where the super layer is in its animation.
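The core of the trick is tiny; here's a sketch, with `containerLayer` standing in for whatever layer holds your animations:

```swift
import UIKit

let containerLayer = CALayer()

// Freeze the layer's local timeline. Animations added to this layer
// (or its sublayers) no longer advance on their own.
containerLayer.speed = 0

// Scrub: for an animation with beginTime 0 and duration 1, timeOffset
// in 0...1 maps directly onto the animation's progress.
containerLayer.timeOffset = 0.5
```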
So, what that lets you do is an effect something like this where, as the user drags up and down, the layer progressively reveals itself and these things kind of animate in and out with that progress.
And so, that's kind of cool, but, like, I could just be making this up, right?
This could just be a key frame animation.
So, let's go on and look at this as a demo.
OK. So, here we have the simple test app set up.
It's got a basic view subclass that sets up all these emoji, lays them out.
Nothing too fancy here.
The key thing about it is that it vends a list of the emoji layers in it so that we can grab them and animate them later on.
So, if you look at our view controller code, there's, again, not too much going on here.
So, let's change that.
We're first going to just build the set of animations that make the panel layer show up over one second, so we have a pair of basic animations here.
One affecting the bound's width, one affecting the height.
You'll notice that I set the duration of the first animation to one because that's the longest animation in this sequence, and it makes it really easy to do the math for how far into the animation you are: zero seconds to one second, zero to one, zero percent to 100 percent.
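The demo code isn't on screen here, but the pair of animations might look something like this; `panelLayer` and the exact durations are assumptions:

```swift
import UIKit

let panelLayer = CALayer()
panelLayer.frame = CGRect(x: 0, y: 0, width: 320, height: 480)

// Expand the width over the full second: progress maps directly to 0...1.
let widthAnim = CABasicAnimation(keyPath: "bounds.size.width")
widthAnim.fromValue = 0
widthAnim.toValue = panelLayer.bounds.width
widthAnim.duration = 1.0

// The height animation is shorter, so it finishes partway through.
let heightAnim = CABasicAnimation(keyPath: "bounds.size.height")
heightAnim.fromValue = 0
heightAnim.toValue = panelLayer.bounds.height
heightAnim.duration = 0.5

panelLayer.add(widthAnim, forKey: "width")
panelLayer.add(heightAnim, forKey: "height")
```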
So, add these two things to the layer and when we run this we see the animation.
And we see it once.
Nothing much happens, right?
And we also notice that the emoji aren't animating in, which is no fun at all.
So, change this up a little bit more.
So, this is pretty straightforward.
Basically what we're doing is just setting up a series of animations, one on each layer, and giving each one a slightly later begin time.
And you'll notice that I can't just set a begin time of, you know, some number in the future.
I have to actually figure out the starting time, the time that the parent layer thinks it is, and add to that.
If I just set it to, you know, zero plus this number, then the animation started somewhere in the past and so everything will already appear to be done.
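That staggering step might be sketched like this; `containerLayer`, `emojiLayers`, and the scale animation are stand-ins for the demo's actual code:

```swift
import UIKit

let containerLayer = CALayer()
let emojiLayers: [CALayer] = []  // vended by the view in the demo

// beginTime is in the parent layer's timeline, so convert "now" into it
// first; otherwise beginTime would land in the past and the animations
// would all appear finished.
let now = containerLayer.convertTime(CACurrentMediaTime(), from: nil)

for (index, layer) in emojiLayers.enumerated() {
    let pop = CABasicAnimation(keyPath: "transform.scale")
    pop.fromValue = 0
    pop.toValue = 1
    pop.duration = 0.3
    pop.beginTime = now + 0.05 * Double(index)  // slightly later each time
    pop.fillMode = .backwards  // hold fromValue until beginTime arrives
    layer.add(pop, forKey: "popIn")
}
```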
So, if we run this again, we now see that the view animates in, all the emoji are moving, but again this is still not interactive.
Let's fix that.
So, going to set up a variable here to keep track of how much the container panel thing is expanded and then we're going to set up a simple gesture recognizer on the view to control it.
So, what's happening here is when this pan gesture recognizer moves in the view, we get the distance that it's moved since the last time something happened.
We scale that from, you know, some pixel point on the screen to just a zero to one value and then we set that value on the current container expansion.
And, finally, so that we don't have to keep track of where the recognizer started, we just set its translation back to zero.
So, the next time it calls in, it'll say this is how far I've moved since you last set me to zero.
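Put together, the gesture handling might look something like this inside the view controller; the 300-point drag range and the names are assumptions for illustration:

```swift
import UIKit

class PanelViewController: UIViewController {
    let containerLayer = CALayer()

    var containerExpansion: CGFloat = 0 {
        didSet {
            // With speed == 0, timeOffset scrubs the one-second animation.
            containerLayer.timeOffset = CFTimeInterval(containerExpansion)
        }
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        let delta = recognizer.translation(in: view).y
        // Map drag distance onto 0...1 progress (300pt of drag = full reveal;
        // dragging up, negative y, increases the expansion).
        containerExpansion = min(max(containerExpansion - delta / 300, 0), 1)
        // Reset so the next callback reports movement since this one.
        recognizer.setTranslation(.zero, in: view)
    }
}
```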
So, that's pretty close to it.
From here you pretty much just need the key final piece: setting the container layer's speed to zero.
And, wait for it... there you have it.
I start dragging, it appears.
I drag up; it disappears.
Down, up, down.
Pretty simple, right?
[ Applause ]
So, those were some quick tips and tricks that you can use with CA.
Some things you can use in your applications to make them kind of more engaging, more visually rich as one might say.
So, to kind of sum all this up, first off, broaden your tool set.
Look through the APIs and kind of get an idea of where things are, what you might be able to use to solve different types of problems.
If you need 3D, you now know, oh, I can use SceneKit for that.
You need to filter a bunch of images, Core Image.
Know the best practices.
It's not essential, but it will save you so much time and so much headache and like when somebody else on your team comes and says what is this ridiculous animation thing doing, you'll be like I know that.
So, pretty useful.
And, finally, experiment with the API.
I mentioned that like every CA layer property more or less is animatable and we only covered like five of them.
There is so much in there.
There are so many cool effects you can make if you just kind of combine things and play with them and just see what happens when you put things together.
We have some great sample code for all of these things that goes into a lot more detail than we've been able to here.
So, first off, grab the sample code from this session.
Take a look at it.
Unfortunately, that demo is not up there yet, but we will get it up as soon as possible.
There's a bunch of sessions that you have unfortunately already missed, but I highly recommend going back to look at the videos for them.
There's some really cool stuff this year, particularly in UIKit and SceneKit.
And there is a SpriteKit one tomorrow if you feel like being up that early.
I'm not sure I will be.
And that's about it.
I hope to see you all at the bash.
[ Applause ]