Creating Custom iOS User Interfaces 

Session 221 WWDC 2014

Make your app stand out from the competition with a compelling, yet familiar, user interface, and custom controls that match your app’s personality and effectively reveal its key features. Learn advanced Core Animation techniques such as spring view animations, and see new visual effects such as blur and vibrancy and how they can give your apps a visual edge.

Good afternoon.

[ Applause ]

My name is Brandon Newendorp and I’m a software engineer on iOS, and welcome to Creating Custom iOS User Interfaces.

If you take a look at iOS today it is filled with really interesting and engaging, unique interfaces for our users.

You can see that in things like the Notification Center which takes advantage of a blur effect to give you a sense of it being at a different depth above the operating system.

Notification Center also uses effects like vibrancy to create text that is highly legible at all times.

You also see interesting interfaces like what we call the Suggestions View in Siri where we have these pieces of text that are animating on and off screen at all times, getting the sense of being pulled on and off of the screen.

We use custom UI in a lot of places across the system, and you might be thinking to yourself, “I want to build really cool stuff like this as well.”

There’s some things that you should think about that are questions we also like to ask ourselves when creating new pieces of UI for the operating system.

The first thing that you should think about is where are you going to use this UI?

How does this fit into your app and what kind of purpose does it serve for your users and your customers?

Sometimes you want to create UI that needs to represent multiple states.

For example, you might be creating a download control that needs states like idle, downloading in progress, cancelled, and complete.

So you want to think about those and think about how to design and craft your control in a way that supports those different states it’s intended for.

You also want to think about whether you’re going to share this control or this piece of UI across several applications.

Starting in iOS 8 you can create frameworks for your app, which are a great way to share code between apps that your team is creating or even release it to the public to help other people use the same kind of pieces of UI.

And then the last thing that you might want to think about are what pieces of API can you leverage to make your job easier?

A lot of times we have UI or controls in UIKit that will make it very easy for you to create custom things without having to subclass anything.

So what are some APIs that you might find useful if you were to want to take advantage of things we offer in UIKit?

One of those is the UIAppearance system.

UIAppearance allows you to configure certain characteristics of the default UIKit controls without having to subclass them.

One of the most popular uses for UIAppearance is the tint color which allows you to configure the color of controls that we provide as part of UIKit without having to subclass them.

One particularly useful companion to the UIAppearance system is UIImageRenderingMode.

This is something that we introduced in iOS 7 last year, that allows you to provide a single image into your application.

We take a look at the alpha channel from that image and can apply tint colors to it.

So if you would like to change the color of this image during your app’s lifecycle or if your design team would like to change the tint color of your application while it’s in development it’s very, very easy to apply universally across your app.
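The tinting techniques just described can be sketched roughly like this in Swift; the asset name and colors here are hypothetical placeholders:

```swift
import UIKit

// "ShareGlyph" is a made-up asset name; once the rendering mode is set to
// template, only the image's alpha channel is used and the tint supplies color.
let glyph = UIImage(named: "ShareGlyph")?.withRenderingMode(.alwaysTemplate)

let button = UIButton(type: .system)
button.setImage(glyph, for: .normal)

// Changing the tint re-colors the template image without touching the asset.
button.tintColor = .red

// Or configure it app-wide through the UIAppearance proxy.
UIButton.appearance().tintColor = .red
```

Because the tint is applied at render time, swapping your app’s accent color during development becomes a one-line change rather than a re-export of every asset.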

We introduced Dynamic Type with iOS 7, which gives you a really great way to respect your users’ requests for larger or smaller type across the system, and it’s important that you do the best you can to honor those preferences to help things be more legible for your users.

Accessibility is a critical part of iOS.

We care very deeply about making sure our operating system is accessible to as many people as possible and there’s some very powerful APIs around accessibility that are important for you to use if you decide to build custom UI in your application.

There’s no reason to not support accessibility in your app.

And finally localization, another very critical part of iOS.

We rely on localization to extend iOS to as many customers as possible, and there’s some new localization tools in iOS 8 that make this even easier for you.

Beyond those I’d like to introduce four new topics today.

The first is we’re going to talk about spring animations, which are a great way to make your controls feel like a native part of iOS.

The next thing are some new techniques for vibrancy and blur.

After that we’re going to talk about CAShapeLayers.

CAShapeLayers are a great way that you can draw custom UI within your application and animate changes to it.

And then finally, we’re going to talk about Dynamic Core Animation behaviors and how you can make changes and have even more control than you may realize over the behaviors that Core Animation provides today.

Let’s get started with Spring Animations.

The first thing is to understand what a Spring Animation actually is. You probably think of a Spring Animation as a bouncing effect that mimics a spring or a Slinky in the real world.

And while you can create very bouncy effects like that with Spring Animations, that’s not the only thing you can do with springs.

Really, you can think about Spring Animations as a new default timing curve for animations within your applications, and in fact we use these new timing curves for nearly every system animation starting in iOS 7.

Nearly everything you see that we provide as part of iOS is built around Spring Animations now.

You can actually create Spring Animations in your app without the use of UIKit Dynamics.

Dynamics is an incredibly powerful tool for creating physics simulations within your app but there’s easier ways to get at Spring Animations.

In fact the API for making a Spring Animation in your app is nearly the same as using the animateWithDuration block that you’re used to using today on UIView.

So what do Spring Animations look like?

On the left side of the screen (for you guys) I have a red box that’s going to animate using the default Ease in/Ease out timing curve.

On the right side we have a blue box that’s going to animate with a Spring Animation.

Both animations will use the same duration and they’re moving the same distance.

But when the animations start you’ll notice that the Spring Animation very quickly gets up to speed and then slowly tapers off as it reaches its final position, compared to the Ease In/Ease Out animation.

Let’s slow that down a little bit more so you can take a closer look.

You can see when the animation begins the Spring Animation quickly moves up to speed and then spends a considerable amount of its animation duration reaching that final position, its final resting state.

Compare this with Ease in/Ease out where it takes a little bit more time to get up to speed.

Another way to visualize this is to plot what that timing curve looks like over time.

You can see a default curve very slowly builds up its velocity and then immediately starts to slow down its velocity for the end.

Spring Animations, however, launch up to speed almost immediately and then spend the last third or so of their duration reaching what we call the “long tail” or that slow bit where we finally reach the final position.
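As a rough illustration of that curve, here is the textbook position function for an under-damped spring released toward its target with zero initial velocity. This is illustrative math only, not UIKit’s actual solver, but the shape is the same: velocity ramps up almost immediately, then a long tail settles toward the final value.

```swift
import Foundation

// Normalized position of an under-damped spring (damping ratio 0 < zeta < 1)
// moving from 0 toward 1, starting from rest.
func springPosition(t: Double, dampingRatio zeta: Double, omega: Double) -> Double {
    let omegaD = omega * sqrt(1 - zeta * zeta)      // damped natural frequency
    let decay = exp(-zeta * omega * t)              // exponential envelope
    return 1 - decay * (cos(omegaD * t) + (zeta * omega / omegaD) * sin(omegaD * t))
}

// At t = 0 the position is exactly 0; for large t the decay term vanishes
// and the position settles at 1, after any overshoot the damping allows.
let start = springPosition(t: 0, dampingRatio: 0.5, omega: 12)
let settled = springPosition(t: 3, dampingRatio: 0.5, omega: 12)
```

Plotting this function against an Ease In/Ease Out curve reproduces the comparison shown on screen: the spring covers most of the distance early, then spends the remainder of the duration in that long tail.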

Where do we use Spring Animations in iOS today?

One place is launching apps and opening and closing folders.

If you pay close attention to these animations you see it feels like the animation begins and very quickly shows the user the content that they’re about to see and then, when closing, very quickly pulls the folder out of the way.

Another place that we like to use Spring Animations is in the default push and pop animations for navigation controllers, as well as presenting modal views on and off of the screen.

And finally, that Siri suggestions view that we looked at earlier is built with Spring Animations.

We use a single Spring Animation to pull the text onscreen, off the bottom, and allow it to slowly drift across the screen, and then a second Spring Animation to quickly pull it out of the way, making room for the next set of strings.

What does the API look like for Spring Animations?

Like I said, there’s a single method on UIView that makes it really straightforward for you to take advantage of Spring Animations in your app.

Many of these parameters look familiar to you.

But I’d like to call out the two that are unique to Spring Animations and those are the damping and initialSpringVelocity parameters.

Damping takes a value from 0 to 1 and controls how much resistance the spring has to completing the animation at the end.

The initialSpringVelocity is used to give the animation a kick to push the object into the animation curve.

You can also use it to synchronize the animation with something that’s already in flight.
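Put together, a call to that method might look like this sketch in Swift; the view and the specific values are placeholders:

```swift
import UIKit

// A hypothetical view to animate; any animatable UIKit property works here.
let someView = UIView(frame: CGRect(x: 0, y: 100, width: 50, height: 50))

UIView.animate(withDuration: 0.6,
               delay: 0,
               usingSpringWithDamping: 0.5,   // 0–1: resistance to oscillation at the end
               initialSpringVelocity: 0,      // the initial "kick" into the curve
               options: [],
               animations: {
                   someView.center.x += 200   // move on a spring timing curve
               },
               completion: nil)
```

Apart from the two spring parameters, this is the same shape as the animateWithDuration block you already use, which is what makes it so easy to adopt.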

Let’s take a look at how these different values have an effect on Spring Animations.

We’re going to start out with the same blue box and a damping of 0.1, and you can see when the animation completes there’s a lot of oscillation at the end.

It feels like you would expect a spring to be in the real world.

You might want this effect…

but you might not.

By stepping the damping up to something like 0.5 you can see we considerably reduce how much oscillation we get with this animation.

There’s still a fair amount of bouncing at the end but it’s much more controlled.

Bringing the damping up to 0.8 eliminates most of that oscillation.

You can see there’s a very small bounce back at the end of the animation but for the most part the animation just smoothly reaches the end.

And finally bringing the damping up to 1.0 gives us no oscillation at the end.

It just smoothly reaches that final position giving a very nice, fast feeling to the animation.

So let’s bring in the initial velocity parameter.

We’re going to leave damping the same but give our spring a much stronger kick to start the animation.

You can see it almost immediately launches the box past its first position and actually overcomes the spring to have it bounce back a little bit at the end.

An even stronger initial velocity can blow even further past that final position but the damping value very quickly pulls it into rest.

And then finally we can combine these to make very unique effects giving us a damping value that resists oscillation and an initial velocity to give a nice fast kick to our spring.

So where would you want to think about using Spring Animations in your application?

They’re a fantastic substitute for linear animations that you’re used to using with the current UIView animateWithDuration.

In fact, any place that you would like to fit into the native UIKit controls that we provide you should consider using Spring Animations instead, because nearly every animation that’s part of UIKit, that’s part of the OS, is built around Spring Animations.

Spring Animations also give your controls and your animations a more natural feeling to the users, because springs are something that are part of the real world around us.

It’s also important to keep in mind that while I’ve showed you examples with the position property you can use Spring Animations to apply to any animatable property in UIKit.

That means if you want to have the alpha for your layers or your views change you can do that on a spring timing curve.

I have a demo application put together and let’s take a look at what Spring Animations can do for us there.

So this is my demo application.

I have this very nice rainbow gradient background and then a control bar at the bottom where I have some custom controls that I can use to configure that gradient’s rendering.

I can change the number of colors and I can also pick how many colors repeat within the gradient.

Now these controls are currently being animated on and off screen with the default Ease in/Ease out timing curve and it feels very…

very normal, but it also feels like there’s a lot of time spent with the control getting in the way of the user actually interacting with it.

So I’m going to substitute in a pair of spring animations.

So in my demo code I have a couple of blocks that I’m doing animations in already that I use to present and dismiss the modal view, and you can see here I’m already using UIView animateWithDuration and passing it in Animation Block.

What I’m going to do is just replace that with a Spring Animation.

You can see the API is almost exactly what we had before, just with a couple of extra arguments.

For my presentation we’re going to use a damping value of .75 and if you remember the boxes on screen this will give us just a little bit of oscillation or a little bit of bouncing at the end of the animation.

I’m also going to use an initial spring velocity of 10 to give it even more of a kick to get that control on screen quickly.

We’re also going to take a look at my dismiss animation.

Again right now we’re using the same animateWithDuration and we’re going to substitute in another Spring Animation.

For the dismiss I’m going to use a very strong damping of 1 because I don’t really want it to be bouncing when it reaches its final position in the dock and I’m going to give it no initial velocity to just let the spring itself pull that view back into the dock.
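Those two animations might be sketched like this in Swift; the view, durations, and center points are hypothetical stand-ins for the demo’s actual code:

```swift
import UIKit

let controlView = UIView()
let onscreenCenter = CGPoint(x: 160, y: 300)   // made-up geometry
let dockedCenter = CGPoint(x: 160, y: 560)

// Present: a little bounce (damping 0.75) plus a strong kick (velocity 10)
// to get the control on screen quickly.
UIView.animate(withDuration: 0.5, delay: 0,
               usingSpringWithDamping: 0.75, initialSpringVelocity: 10,
               options: [], animations: {
                   controlView.center = onscreenCenter
               }, completion: nil)

// Dismiss: damping 1 (no bounce) and zero velocity, letting the spring
// alone pull the view back into the dock.
UIView.animate(withDuration: 0.5, delay: 0,
               usingSpringWithDamping: 1.0, initialSpringVelocity: 0,
               options: [], animations: {
                   controlView.center = dockedCenter
               }, completion: nil)
```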

With those changes let’s take a look at what that does to the app.

Just as a quick reminder: this is what it looks like right now, without the Spring Animations.

The control very smoothly slides in and out but it still feels a little bit like it’s getting in the way.

When we switch over to a version with the Spring Animations you can see there’s a very different but subtle effect.

The spring quickly pulls the view onscreen.

You can see that very small bounce when it presents, but it feels like it gets on and off the screen and available to the user much faster than it did before.

And that’s how we can use Spring Animations to bring a new feeling to animations within your app.

The next thing I’d like to talk about is something called UIVisualEffectView.

Before we get to that however, I’d like to spend a couple of minutes talking about how we render things on iOS.

Some of the APIs that we’re going to be talking about today can have a dramatic effect on the rendering performance of your application and it’s important to understand how they actually work within the rendering system.

So if you were to take a look at the entire system there’s largely four steps that we take to render content for your application.

We give your app a chance to handle events and then commit a transaction to the render server.

The render server, which is part of the system on iOS, processes that transaction, does some more work and then composites layers which are handed off to the GPU to be rendered.

The GPU will take its own block of time to render all those layers and then hand it off to the display to present to your user.

Let’s focus in just a little bit on the steps that we take within your application.

There’s largely four things that take place within your app.

The first thing we’ll do is do any setup work and ask your views to lay themselves out.

If you’re using auto layout we’re going to take the time here to compute the layout for your app.

If you’re doing manual layout for your views we’re going to call layout subviews and ask your views to lay themselves out.

The next thing we’ll do is do any necessary drawing.

If any views need to draw themselves, if you’ve implemented drawRect on some of your views, we’ll ask that to occur here.

We’ll also do some string drawing at this time.

Next we’re going to give Core Animation a little bit of time to prepare some images if it needs to decode or convert them for rendering before we finally package up the layers and send them off to the render server.

This is a very, very broad overview of how the render server works.

If you’d like to find out more there was a fantastic talk yesterday by the performance team that I would really encourage you to check out.

They go into a lot of great detail on how all these steps actually work.

So with that quick understanding, let’s go back to UIVisualEffectView.

Last year at WWDC we introduced a new API called drawViewHierarchyInRect, and a system to go with it, for creating fast, static blur effects within your application. We continue to recommend that technique whenever the content you want to show behind the blur is not actively changing for the user.

The reason for this is it’s just incredibly well optimized for your application.

So we encourage you to continue using that as much as possible.

However, we heard that one or two of you wanted something a little bit more advanced.

You wanted to do live blur effects like we have across iOS, and to that end we are introducing UIVisualEffectView.

UIVisualEffectView creates two effect types.

It’s a technique for creating live blurs in your application and for creating vibrancy effects.

Let’s take a look at what both of those effects actually mean.

A live blur effect is exactly what the name suggests.

It’s where you can have content behind a view that is blurred, with some color adjustments applied as well, all rendered in real time on your device.

Vibrancy is used to create content that is highly legible at all times on top of a blur.

In the screenshot you see right now, the FaceTime icon and the text are being rendered with vibrancy, and you can see that that text is always legible no matter what colors or content are being blurred behind it.

[ Applause ]

We have three styles for blur effects.

The first of those is a Dark blur.

You can see that we’re darkening the background here, we’re also desaturating the colors a little bit.

It’s more than just changing a blur.

A Light blur effect is almost the opposite of that.

Again we’re blurring, but we’re starting to wash out some of the colors as well.

Finally, we introduced the ExtraLight blur, which almost entirely desaturates the content and blows out most of the colors, but still gives you a blur effect to give you a sense of what content is behind the view.

So what steps do we take to create a blur effect?

As I said, it’s more than just a simple Gaussian blur within your application.

The first thing we do when we create a blur effect is we’re going to downsample all of the content that’s waiting to be blurred.

The reason we’re doing that is for performance.

You aren’t going to see the fine details of the view once it’s blurred.

So we go ahead and downsample it to a much smaller size before applying the blur effect.

The next thing we do is modify the colors and this is based on what blur effect you requested.

We’ll either desaturate or oversaturate colors to create the various styles.

And then the final step is we’re going to compute the blurs and then composite this back into your application.

Vibrancy effect, as I said, is a technique for making legible content that is typically placed on top of a blur.

And we create this with a few steps as well.

The first thing we’ll do is boost the saturation of your content, and then we apply a custom blend mode to the vibrant content. That blend mode changes based on which kind of blur you intend to place the vibrant content on top of.

And for that reason you need to tell us what kind of blur you’re using when you create a vibrant effect.

So how do we use this in your application?

The first thing you’ll do is initialize a new UIVisualEffect, and you can initialize either a UIBlurEffect or a UIVibrancyEffect.

UIVibrancyEffect actually takes the blur in its initializer so we know how to create that vibrancy effect for the blur you plan to use.

After that you can instantiate a UIVisualEffectView and pass it the visual effect that you’ve just created.

Finally, the UIVisualEffectView provides a content view and you should add your subviews to that content view.

That way we know which views you want to have affected by the visual effects.
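Those three steps might look roughly like this in Swift; the label and its text are placeholder content:

```swift
import UIKit

// Step 1: create the effects. Vibrancy takes the blur in its initializer
// so the system knows which blur the vibrant content will sit on top of.
let blur = UIBlurEffect(style: .dark)
let vibrancy = UIVibrancyEffect(blurEffect: blur)

// Step 2: create a visual effect view for each effect.
let blurView = UIVisualEffectView(effect: blur)
let vibrancyView = UIVisualEffectView(effect: vibrancy)

// Step 3: add subviews to the contentView, never to the effect view itself.
let titleLabel = UILabel()
titleLabel.text = "Colors"                       // hypothetical content

vibrancyView.contentView.addSubview(titleLabel)  // vibrant content
blurView.contentView.addSubview(vibrancyView)    // vibrancy nests inside the blur
```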

There’s a couple of things you can do to customize VisualEffectViews in your application.

The first thing you can do is you can tint the blurs by changing the backgroundColor property on the VisualEffectView’s contentView.

So if you would like to create a dark red blur effect, for example, you can make a dark blur and then set the contentView’s backgroundColor to a red color to have the desired effect.

You can also animate frame changes to alter the position and size of the blur effect.

Now visual effects also come with a number of caveats when you consider using them in your application.

The first of those is the idea of setting alpha.

If you think about the purposes of blur and alpha, they’re at odds with each other.

You apply alpha to a view in order to let the content behind it show through; to see what’s back there.

You use a blur effect to show that there’s something there but you’re not interested in the details.

So it doesn’t really make sense to apply alpha to a blur, and for that reason we’re going to drop the blur effect if you try to change the alpha.

You should also be very cautious about placing VisualEffectViews in a view hierarchy that uses masks.

They’re not going to work very well together.

You should also be conscious that placing a Visual Effect View in groups, either animation groups or opacity groups, can have an impact on its rendering.

It’s very easy to have, say, a blur or a vibrancy effect really deep in your view hierarchy and then, when you add an animation somewhere else, forget that it’s going to pick up that VisualEffectView and try to make changes to it.

So I started this section by talking about the rendering pipeline and I’d like to highlight what impact VisualEffectViews have on the rendering of your application.

All of the work for VisualEffectViews happens on the GPU.

So if we look at a standard view in your application it’s going to take a pretty typical amount of time to render on the GPU.

In order to create blur effects we have to take all of those steps that were outlined and render those offscreen.

It’s just the way that we have to create blur effects.

So we’re going to ask the GPU to stop rendering your app’s main views, and create this blur effect offscreen, and then combine everything back together…

and that can take considerably more time than rendering a standard view without blur effects.

Vibrancy requires additional rendering passes on top of those needed for the blur effect it’s placed over.

So to add vibrancy into a view takes even more offscreen time for your application.

Now to contrast this with drawViewHierarchyInRect, we can do that considerably faster because we don’t have to do all of those offscreen passes every frame.

So this is why we still strongly encourage the static blur techniques from last year if at all possible.

Let’s take a look at how we can use VisualEffectViews in that same demo application.

So going back to the demo app, I still have these controls that come onscreen and right now they’re just being placed on top of a dimming view and you can see I have this white text label at the bottom that’s not really legible anyway and it just doesn’t look very interesting.

We also have a control bar at the bottom, which is just solid white right now and kind of feels out of place on this rich, colorful gradient background.

So we’re going to use blur effects and vibrancy effects to improve this.

So back in my demo application the first thing we’re going to do is go to the gradient view controller and we’re going to set up that control bar at the bottom of the screen.

The control bar is that white bar at the bottom with the different controls in it, and we’re going to give it an ExtraLight blur effect. The key thing to take a look at here is the first part, where we create a new UIBlurEffect and give it a style.

In this case we’re using an ExtraLight blur effect style.

Next we’re going to create a UIVisualEffectView and pass it the effect that we just created, that ExtraLight blur effect.

And then finally we’ll add this to our view hierarchy.

We’re also going to do this on the clickwheel controller and I want to do both the blurred background and a vibrant title here.

First we’re going to make that blurred background view.

Very similar to before, we’re going to make a UIBlurEffect but give it the Dark blur effect style, then again create a UIVisualEffectView, pass it that blur effect we just created, and add it to our view hierarchy.

The next thing we do is we want to modify that title label to use vibrancy.

You can see right now I’m creating my label and just giving it a white color.

It’s pretty straightforward to upgrade that to use vibrancy.

To do that we’re going to make a vibrancy effect and you notice the initializer is effectForBlurEffect.

We’re going to grab a reference to the blur effect off of that backdropView that we created right up here, and pass that into the vibrancy effect because we plan to composite this title label on top of that background blur.

Same as before, we’ll make a new UIVisualEffectView and give it the vibrancy effect that we just created, and then down here at the bottom we’re going to add my title label to the content view of that new vibrancyEffectView.

So those are the three steps I needed to take to add blurs and vibrancy to the demo application.

Let’s take a look at what those look like.

You immediately see that we’re picking up that blur effect, that ExtraLight blur effect, at the control bar at the bottom of the screen.

It’s reflecting the purple color of the gradient behind it.

When we bring our controls onscreen you can see that we’re blurring that gradient as well as picking up vibrancy for that text label at the bottom.

You can see the purple and the white from the views behind it coming through.

Now you’re probably thinking, “Well that’s great Brandon, but you’re probably just faking it with that live blur or that static blur thing.”

I promise you we’re not.

Because if I start scrolling this you can see the blur effects are all taking place in real time and you can see that the vibrancy is also picking up all those really cool colors behind everything.

You’ll also see that control bar picking up those colors behind itself as well.

It’s that easy to add vibrancy and blur to your application.

[ Applause ]

So moving on, next we’re going to talk about CAShapeLayer.

CAShapeLayer is a really powerful tool for drawing custom Bézier paths within your application.

CAShapeLayer, as the name suggests, is a subclass of CALayer, which makes it very, very easy to use in all of your app’s existing view hierarchies because on iOS all of your views are already backed with CALayers.

The other really powerful part of CAShapeLayer is that all of the properties on it that control its appearance are animatable.

So where am I going to use this Shape Layer?

One place is if you were going to create a download control similar to the one that you see in the Music app or the various store apps.

These need to have a few different states: a thinking state, where a partially completed ring spins; a state with a stop button; and a download progress state, where the ring fills in from zero to 100%.

This would be a great thing to draw with a CAShapeLayer in your application.

I’d like to show you a few of the more interesting properties on Shape Layers right now.

The most important one is the path property.

As the name suggests, this defines what your shape layer actually is going to look like.

You create these by handing us a CGPathRef.

Personally, I like to work with UIBezierPath and then hand off the CGPath version of that at the end.

So there’s a few steps that we’re going to take to create the curve that you see onscreen.

You can think about creating paths as very similar to drawing on a piece of paper, where you move your pencil to a point and then move the pencil on the paper to draw lines.

The first thing we’re going to do is move to a specific point where we want to start the line.

In this case we’re going to start on the lower left-hand corner.

The next thing we’ll do is add two curves, and you’ll notice we pass each a pair of control points.

Bézier paths are defined by a point and then a pair of control handles which you see in blue on the diagram above.

Those control points define what the curve looks like between the previous point and the current point.

So we’re going to add two curves with their control points to my path and then finally set that path on my shape layer.
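Those path-building steps might be sketched like this in Swift; all of the coordinates are made up for illustration:

```swift
import UIKit

// Move to a start point, add two curves with their control points,
// then hand the CGPath version to the shape layer.
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 100))                  // lower-left starting point
path.addCurve(to: CGPoint(x: 100, y: 50),
              controlPoint1: CGPoint(x: 40, y: 100),  // shapes the curve out of the start
              controlPoint2: CGPoint(x: 60, y: 50))   // shapes the curve into the end
path.addCurve(to: CGPoint(x: 200, y: 0),
              controlPoint1: CGPoint(x: 140, y: 50),
              controlPoint2: CGPoint(x: 160, y: 0))

let shapeLayer = CAShapeLayer()
shapeLayer.path = path.cgPath
```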

The next set of properties controls what the line looks like, and a few of them are particularly interesting.

The first is the lineCap property.

This defines what the ends of each line look like.

By default they’re just a flat line.

In this case we’re going to set a round lineCap on them to give them a nice, rounded effect.

The lineDashPattern defines how we draw dashed lines and it gives you a lot of control over what that looks like.

You create a lineDashPattern by giving us an array of NSNumbers and we go through this array to decide how many points to draw, and then how many points to not draw.

So in this example, we’re asking the CAShapeLayer to draw for two points and then not draw for four points, draw for six and then not draw for the final six, and then repeat that pattern across the length of the line.

LineDashPhase is used to control how far into that array we start the line drawing.

So by setting a lineDashPhase of four it essentially shifts that dash pattern four points along the line.
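The line properties just described might be configured like this; the values match the example above:

```swift
import UIKit

let dashedLayer = CAShapeLayer()
dashedLayer.lineCap = .round                 // rounded ends on each dash segment
dashedLayer.lineDashPattern = [2, 4, 6, 6]   // NSNumbers: draw 2, skip 4, draw 6, skip 6
dashedLayer.lineDashPhase = 4                // shift the whole pattern 4 points along
```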

The next set of properties relates to the stroke of the line, which is how we actually draw it.

The most obvious one is called strokeColor and, as you would expect, it defines the color of the line.

Pretty straightforward to use I hope.

The next two are a little bit more interesting, and those are the strokeStart and strokeEnd properties.

StrokeStart defines how far into the line we’re going to begin stroking, or drawing, it.

In the video above, I’m actually drawing the original path in a dark red behind it, so that you can see the difference between the full path and what the code is actually drawing.

So by setting our strokeStart to 0.2 we’re going to skip drawing the first 20% of the line.

StrokeEnd is very similar, but defines the point along the path at which we stop drawing.

So by setting it to 0.6 we’re going to stop drawing 60% of the way along, leaving the final 40% of the path undrawn.

You can imagine this particular property will be very powerful if you were creating a download control.

Because you could take that download progress from zero to 100% and directly map that onto the strokeEnd property on CAShapeLayer, which is again animatable, so the shape layer is automatically going to animate between states for you.
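A sketch of that idea in Swift; the ring geometry and the helper function are hypothetical:

```swift
import UIKit

// A circular progress ring drawn as a stroked, unfilled shape layer.
let progressRing = CAShapeLayer()
progressRing.path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0,
                                                width: 44, height: 44)).cgPath
progressRing.fillColor = nil                 // stroke only, no fill
progressRing.strokeColor = UIColor.blue.cgColor
progressRing.lineWidth = 3
progressRing.strokeEnd = 0                   // 0% downloaded

// Map download progress (0...1) straight onto strokeEnd. Because the
// property is animatable, each change picks up an implicit animation.
func downloadProgressed(to fraction: CGFloat) {
    progressRing.strokeEnd = fraction        // e.g. 0.6 draws 60% of the ring
}
```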

The final property that you’ll probably find useful in some cases is the fillColor.

You don’t always want to add fillColor.

It doesn’t always look awesome, but it’s a very powerful thing to have if you want more of a closed path.

Shape layers also can have a big impact on the rendering performance of your application.

So let’s go back to that rendering pipeline one more time.

In this case shape layers have an impact on all the rendering that occurs within your application itself and there’s a few things that we want to talk about when you’re rendering shape layers.

When you have a shape layer we have to rasterize the shape layer on the CPU within your application.

What that means is when we’re doing all of your view drawing we’re going to actually have to ask the shape layer to compute itself and then draw it into, essentially, an image that we can composite later.

We’re going to send that rasterized layer over to the render server for later compositing and this process can get very expensive in terms of CPU time.

Especially if you have a very complex path for your shape layer.

The more complex it is, the longer it’s going to take us to draw and prepare that shape layer.

So one thing you could do to minimize that effect is you can use more layers with less complexity and then stack them behind each other to get the same effect that you’re asking for.

For example, with that download control, that’s showing the download state: you might want to draw the square as one shape layer, the outer ring as a second shape layer and then show the download progress as a third shape layer and then composite the three of them together by just stacking them in the same place in your view hierarchy.
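A sketch of that stacking approach, with hypothetical layer names; each individual path stays simple and cheap to rasterize:

```objc
// Sketch: compose a download control from three simple shape layers
// stacked at the same spot, instead of one complex path.
CGRect bounds = self.bounds;

CAShapeLayer *squareLayer = [CAShapeLayer layer];   // inner stop square
squareLayer.path = [UIBezierPath bezierPathWithRect:CGRectInset(bounds, 20.0, 20.0)].CGPath;

CAShapeLayer *ringLayer = [CAShapeLayer layer];     // static outer ring
ringLayer.path = [UIBezierPath bezierPathWithOvalInRect:bounds].CGPath;
ringLayer.fillColor = nil;

CAShapeLayer *progressLayer = [CAShapeLayer layer]; // animated progress arc
progressLayer.path = [UIBezierPath bezierPathWithOvalInRect:bounds].CGPath;
progressLayer.fillColor = nil;
progressLayer.strokeEnd = 0.0; // driven by download progress later

[self.layer addSublayer:squareLayer];
[self.layer addSublayer:ringLayer];
[self.layer addSublayer:progressLayer];
```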

You should also be very conscious of frequent changes to shape layers within your application.

Every time a shape layer changes we have to re-rasterize it and send that new result over to the render server.

So if your shape layer is changing every frame we’re constantly having to redraw it and hand this new object off which can get very expensive and have a negative impact on your app’s performance.

I actually have a shape layer hidden in the demo application that we’ve been taking a look at, so I’d like to show you how that’s implemented and then we’ll make a couple of quick changes to the appearance of my shape layer.

So first let’s see where that’s at.

If we look in the demo application you probably noticed that there are two controls that I haven’t touched yet, and those affect the line end and the dash size.

So let’s bring the stroke end all the way up to 100% and you see I suddenly get this nice red box.

It’s the world’s most interesting shape layer.

I also have a dash size property that controls the length of the dashes in that array.

So if I bring my dash size up we’re creating a dashed line effect instead.

If we bring the stroke end back down to something else, like 68%, you can see we’re only drawing the first 68% of the line.

My shape layer’s line starts in the upper left corner and then progresses around the box from there.

Let’s take a look at how this is implemented in code and make a couple of quick changes to it.

What I’ve done is I’ve created a custom UIView subclass that I’m calling ShapeView, and my ShapeView, like all views in iOS, is backed with a CALayer.

We give you the ability to override what kind of layer your view is backed with by implementing the +layerClass class method, and I’ve asked my view to be backed with a CAShapeLayer instead.
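That override is just a few lines; a sketch of the pattern, not the demo’s full source:

```objc
@interface ShapeView : UIView
@end

@implementation ShapeView

// Ask UIKit to back this view with a CAShapeLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [CAShapeLayer class];
}

@end
```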

In my awakeFromNib I’m going to do some configuration work to create that shape layer.

The first thing we’ll do is create a CGPathRef and create it with a rectangle that represents my view’s bounds.

That’s how we’re creating that nice, simple, square shape layer.

We set the path and then spend some time configuring a few of the properties of my shape layer.

The stroke color, again, defines the color we use to draw the outline of the path, in this case a red color.

We’re setting our line width to six to make it nice and visible on top of the background and I had the strokeEnd start at zero, so that none of you would see the shape layer in advance.

I also set the lineJoin.

lineJoin defines what the corners of those lines look like when they come together, and we’re defaulting that to a round join.

We’re also setting the lineCap (which is, again, what controls the edges of each line) and making those round as well.

And finally I’m not using a fill color for my shape layer.
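Putting the configuration just described together, it looks roughly like this (a sketch reconstructed from the narration; the exact demo source isn’t shown):

```objc
- (void)awakeFromNib {
    [super awakeFromNib];
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;

    // A rectangular path tracing the view's bounds.
    CGPathRef path = CGPathCreateWithRect(self.bounds, NULL);
    shapeLayer.path = path;
    CGPathRelease(path);

    shapeLayer.strokeColor = [UIColor redColor].CGColor; // outline color
    shapeLayer.lineWidth = 6.0;               // visible on the background
    shapeLayer.strokeEnd = 0.0;               // hide the line initially
    shapeLayer.lineJoin = kCALineJoinRound;   // rounded corners
    shapeLayer.lineCap = kCALineCapRound;     // rounded line ends
    shapeLayer.fillColor = nil;               // no fill for this shape
}
```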

Down here at the bottom I have two methods that are used to configure properties based on those controls that you see onscreen.

One controls the strokeEnd, the other is setting that lineDashLength.

Right now what I’m doing is making an array that creates two NSNumbers that use the same value for both the distance to draw and the distance not to draw and we’re going to make a change to that first.

I’ve decided that the dashes just look too close to each other so instead what we’re going to do is first use the same NSNumber to draw for a distance and then we’re going to multiply that by four to have a much larger gap between our dashes.

I’m also going to make a quick change to the lineJoin of my shape layer and switch over to a bevel line join.
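Those two edits might look like this (a sketch; `dashSize` is a stand-in for the demo’s property):

```objc
// Draw for dashSize points, then leave a gap four times as long.
NSNumber *drawLength = @(dashSize);
shapeLayer.lineDashPattern = @[drawLength, @(dashSize * 4.0)];

// Switch the corners from round to bevel.
shapeLayer.lineJoin = kCALineJoinBevel;
```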

Let’s take a look at those changes in the demo application now.

So again we’re going to bring that strokeEnd all the way up first to see the changes we’ve made and the first thing you see is the change that we made to the line join.

If you take a look at the corners of my boxes, instead of having that nice, rounded effect they have this nice diamond-cut chamfer around the edges.

It looks fantastic.

We’re also going to take a look at those dash size changes we made so we’ll dial that up a bit and you can see a dramatic difference in what we had before.

We’re drawing a very short dash followed by a much longer gap.

And again, just to illustrate those changes, let’s dial some of these effects back and you can again see the difference that makes on our shape layer.

Now you might have noticed in code that I have a tap gesture recognizer set up on my shape layer.

Let’s take a look at what that’s doing.

Every time I tap on my shape layer I’m picking a random alpha value to animate to and changing the opacity of my view and of the shape layer itself.
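The handler behind that is roughly the following sketch; the selector and property names are assumptions:

```objc
// Sketch: animate to a random alpha on each tap.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGFloat randomAlpha = arc4random_uniform(101) / 100.0; // 0.0 - 1.0
    [UIView animateWithDuration:0.75 animations:^{
        self.shapeView.alpha = randomAlpha;
    }];
}
```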

It’s an interesting effect but it’s kind of boring and I’d like to do something a little bit more interesting for my users and to do that we’re going to take a look at how we can create Dynamic Behaviors with Core Animation.

As you’re probably aware UIKit and UIView provide a lot of interesting animations for your application and you have a lot of control over how these animations look and behave.

Core Animation provides implicit animations on your layers.

Nearly every property that you would change on a layer would be implicitly animated by Core Animation.

That means you set the property and CA will just provide an animation from the current value to the new value for you automatically.

What you might not be aware of is you as a developer have the ability to override the implicit behaviors that Core Animation provides to you.

And you might be thinking, “That’s nice.

What would I want to do with that?”

One thing you could do is you could just disable the implicit animations.

You could just say, “You know, I don’t want any animations.

Thanks anyway.

Don’t do anything at all.”

And that’s fine.
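As a preview of the delegate hook we’ll walk through in a moment, returning NSNull from actionForLayer:forKey: is how you say “no animation” (a sketch):

```objc
// Sketch: suppress the implicit opacity animation entirely.
- (id<CAAction>)actionForLayer:(CALayer *)layer forKey:(NSString *)event {
    if ([event isEqualToString:@"opacity"]) {
        return (id<CAAction>)[NSNull null]; // "don't do anything at all"
    }
    return [super actionForLayer:layer forKey:event];
}
```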

You can also use this capability to change the behavior of the default animations that Core Animation provides.

For example you might want to have your layer or your view show a different color while the opacity animation takes place.

You also might decide that you want a layer’s path to be something other than a straight line as it moves from point A to point B.

If you were to create a new CALayer subclass you can also add your own properties and make them animatable with this technique.

The key part of this is that we can use this technique to define behaviors that are inherent to custom views within your application.

This encapsulates any custom animation behaviors you’d like and bakes them into the view itself.

So if you want to reuse this view elsewhere in your app or in a framework it just comes along as part of the view.

How do these work?

In this case I have my view which, like all views in iOS, is backed with a CALayer.

CALayer has a delegate protocol, CALayerDelegate, that all of your views implement, and your view is always the delegate for its layer.

So let’s say in my application I want to animate the alpha.

I’m calling UIView animateWithDuration and setting the alpha on my view to 0.2.

Let’s take a look at the steps that take place when we request this.

Your CALayer is going to ask its delegate actionForLayer:forKey: and that’s going to be called on my custom view.

If you haven’t implemented this it will automatically go to the superclass.

If you have implemented it, you have a couple of options.

The first thing you can do is call actionForLayer:forKey: on your superclass and just pass the same result back on to Core Animation.

If you decide that you would like to have a little bit more control over the animation instead, you can create a new action result and pass that back.

You create objects to conform to the CAAction protocol and hand that new action object back to Core Animation.

The CAAction protocol defines a single method: runActionForKey:object:arguments:.

And you can implement the CAAction protocol on any object.

Typically you implement this on a plain NSObject subclass, a very nice, lightweight object to pass back to Core Animation.

You can do whatever you would like in the implementation of runActionForKey.

Typically you’re going to want to actually run some animations there but you might decide you’d like to do something else: fire some notifications, start or cancel timers, keep count of something, whatever your minds come up with.

You can do whatever you like in runActionForKey.

There’s two arguments in here that are particularly interesting.

The first is the key.

This specifies what property just changed on the CALayer that you’re being asked to interact with.

The second is the object argument.

This is providing a reference to the CALayer that’s being animated right now, or having values changed on it right now, and this is how you have access to the new values that have been set as well as how you can apply these changes to the layer that you’ve been manipulating.
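A minimal CAAction object showing both arguments in use; this is a hypothetical example, not the demo code:

```objc
// Any NSObject subclass can adopt CAAction; this one just logs.
@interface LoggingAction : NSObject <CAAction>
@end

@implementation LoggingAction

- (void)runActionForKey:(NSString *)event
                 object:(id)anObject
              arguments:(NSDictionary *)dict {
    // `event` is the key path that changed; `anObject` is the layer
    // itself, which already carries the newly set value.
    CALayer *layer = (CALayer *)anObject;
    NSLog(@"layer %@ changed %@", layer, event);
}

@end
```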

It’s hard to talk about this in slides so instead let’s take a look at it back in the demo application.

We’re going to go back to my shapeView class and I’ve decided I’d like to add some custom behavior for when the opacity changes on my layer and on my view.

The first thing I need to do is create a new object that conforms to the CAAction protocol.

So to do that I’ve defined an opacity action object and it has a single method in its implementation: runActionForKey:object:arguments:.

Let’s take a look at some of the steps we’re making here.

The first thing we’re going to do is check which event this action is being requested for, and we’re only interested in doing things here if the layer’s opacity was asked to change.

I’m going to give myself a nice pointer to the CALayer that was passed in and define a duration to use for all of my animations.

I’m going to create three CA basic animations for my application that take place when this opacity change occurs.

What I’ve decided I’d like to do is have the background color of my layer change as the opacity animates as well and we’re going to do that with these few steps.

Now the things I’d like you to notice is: first, the fromValue.

Core Animation needs to know what to start an animation with and what to end it with.

The fromValue is defining the beginning of that.

Now my layer already has the new value set on it when my action object is called…

which means I need to know what it looks like onscreen, and layers provide what they call a presentation layer.

The presentation layer is a representation of what the layer looks like onscreen to the user at that instant in time.

So I’m asking my presentation layer for its current background color to be the starting point of this animation.

My toValue is the new value that we’re requesting.

And what I’m doing here is animating changes to the hue value and leaving the saturation and brightness alone.

If you’re not familiar with hue, saturation and brightness you can think about a color wheel where the hue is defining what point along that circle, what color we’re using.

So I’m going to take the opacity value and use that to define my hue.

I’m also going to make the same animation to the stroke color of my shape layer, but this time we’re going to use some modifications to the value that we’re setting on the hue.

And then finally I do in fact want to animate the opacity myself and since I’m overriding the action I have to provide my own opacity animation as well.

So I’ll set that using the same presentation layer and the toValue, and that’s all there is to making a new action object.
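Reconstructed as a sketch, the action looks roughly like this; the hue and saturation numbers are assumptions, and the stroke-color animation is omitted for brevity:

```objc
- (void)runActionForKey:(NSString *)event
                 object:(id)anObject
              arguments:(NSDictionary *)dict {
    if (![event isEqualToString:@"opacity"]) {
        return;
    }
    CAShapeLayer *layer = (CAShapeLayer *)anObject;
    CAShapeLayer *presentation = layer.presentationLayer; // onscreen state
    CFTimeInterval duration = 0.75;

    // Animate the background color: start from what's onscreen right now,
    // end at a hue derived from the newly set opacity value.
    UIColor *newColor = [UIColor colorWithHue:layer.opacity
                                   saturation:0.8
                                   brightness:0.8
                                        alpha:1.0];
    CABasicAnimation *background = [CABasicAnimation animationWithKeyPath:@"backgroundColor"];
    background.fromValue = (__bridge id)presentation.backgroundColor;
    background.toValue = (__bridge id)newColor.CGColor;
    background.duration = duration;
    [layer addAnimation:background forKey:@"backgroundColor"];
    layer.backgroundColor = newColor.CGColor;

    // Because we overrode the action, we must supply the opacity
    // animation ourselves as well.
    CABasicAnimation *opacity = [CABasicAnimation animationWithKeyPath:@"opacity"];
    opacity.fromValue = @(presentation.opacity);
    opacity.toValue = @(layer.opacity);
    opacity.duration = duration;
    [layer addAnimation:opacity forKey:@"opacity"];
}
```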

Now I need to make sure that my action object is actually going to get called.

And we’re going to do that within my view’s implementation itself.

So my view is going to implement actionForLayer:forKey:.

This is that CALayerDelegate method.

It’s pretty straightforward.

I’m going to check to see if I’m being asked to provide an action for the opacity value.

If I do, I alloc-init a new instance of my opacity action and hand it back to Core Animation.

In all other cases I’m going to call through to my superclass and allow the default behaviors to be returned.

And those are the steps we need to take to provide a custom CA action within your application.
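The delegate side, as described, is only a few lines (a sketch; OpacityAction stands in for the action class just discussed):

```objc
// Sketch: vend the custom action only for opacity changes.
- (id<CAAction>)actionForLayer:(CALayer *)layer forKey:(NSString *)event {
    if ([event isEqualToString:@"opacity"]) {
        return [[OpacityAction alloc] init];
    }
    // All other keys keep their default behavior.
    return [super actionForLayer:layer forKey:event];
}
```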

Let’s see what effect that has on our box.

So we’re back in the demo app again and I’m going to bring that box all the way up for full visibility and to make sure that it’s, y’know, really going to stand out we’ll bring some more colors into things as well.


I like it with fewer colors, so we’ll do that; put a little bit of a dash size on, just for fun.

Now if you remember, the way this is configured is I’m going to tap on the box which will pick a random opacity value and animate to that over the newly defined 0.75 second duration.

Whenever we get that new opacity value we’re also going to animate the hue of the background color and the stroke.

So you can see: every time I tap on that we’re getting a very interesting set of color combinations and we’re also still making changes to the background.

It’s a very interesting and very unique effect.

We can see the opacity better this way.

But the opacity is changing alongside the hue for the background and the stroke colors.

And the really powerful part of the way I’ve built this is: all of these behaviors are inherently part of my custom view.

So any time I use that view I can add it to my view hierarchy anywhere in the app and I don’t have to remember to animate all three properties.

I just set the alpha and it automatically gets all the new behaviors that I’ve requested.

It’s a really powerful capability.

[ Applause ]

So to summarize what we’ve talked about today: we’ve presented four new techniques.

The first is how you can use Spring Animations in your application.

Think about Spring Animations as a new timing curve for animations within your application that make them fit in better with the rest of the platform and feel more dynamic and engaging for your users.

UIVisualEffectView is a great new API introduced in iOS 8 that will allow you to create live blur effects as well as vibrancy to keep content highly legible on top of blurs.

CAShapeLayer allows you to design custom Bézier paths and draw them within your application and animate changes to those properties, and we’ve talked about how to use CAAction to get more dynamic behaviors out of Core Animation.

If you’d like to learn more about these technologies there’s some great documentation for UIKit and for Core Animation.

You can also talk to Jake, he’s our Apps Frameworks Evangelist.

He has great shoes and he loves getting email from all of you.

There’s some fantastic new sessions going on this week that talk about more of these topics that we have covered.

Many of them happened yesterday and there’s going to be a fantastic session on Friday at 11:30, in Presidio, that I really encourage you to check out.

Thanks for coming and I hope you enjoy the rest of your time at WWDC.

[ Applause ]
