Storyboards and Controllers on OS X 

Session 212 WWDC 2014

Learn how to build your OS X application using storyboards, view controllers, and gesture recognizers. See how these OS X enhancements can simplify complex user interaction controller code. Gain a practical understanding with a high level introduction to Xcode’s Interface Builder integration and dive deep into a detailed discussion of the new APIs with tips to create great apps for OS X.

Good afternoon.

Welcome to Storyboards and Controllers on OS X.

My name’s Mike, and I work on the Interface Builder team.

Today, I’d like to show you how you can build your OS X applications faster and easier using the all-new Storyboards in Xcode 6 and Yosemite.

I’m also going to be joined later by my colleague Raleigh from the AppKit team, who will come up and take us under the hood of the new controller APIs that make storyboards work.

So just a quick show of hands here, who is already building an app today for iOS using Storyboards?

Oh, awesome, cool.

So you’ll probably notice a little bit of what I’m about to show you is going to look pretty familiar.

So what are we going to see?

First, I’m going to start by giving you a brief overview of how storyboards work.

Then, I’m going to give a demo showing how you can quickly connect the top-level controllers and interface your app together using Xcode 6.

Hopefully, you’ll see how Storyboards can help you structure your app into modular and reusable view controllers.

Then, Raleigh’s going to come up and show us the view controller API.

We’ll see how all of the pieces click together underneath the Storyboard scenes, and he’ll show you the best places to hook into the infrastructure.

And finally, as a bonus, he’s going to show you how to use the new gesture recognizers in OS X without having to write an NSEvent tracking loop.

All you have to do is just drag and connect to your target and action.

They’re so simple, it’s simply amazing.

So in Xcode, Storyboards are a great way to see all UI parts of your application laid out in a way that’s easy to understand.

Take for example an application like Pages.

We can decompose each of the major pieces of the UI into each major section of functionality, like the window toolbar, the document area, and the shape picker.

Each of these views has a corresponding controller, and we show that in the storyboard as a scene.

Each scene represents a view and view controller pair.

The lines connecting the scenes, known as segues, show the relationships between them.

Some of these relationships represent a containment of view controllers inside of each other.

Other segue lines represent transitions between view controllers.

But each one represents a flow of data in some form within the app.

In your code, view controllers are where you, the developer, actually get your hands into controlling both the UI and the underlying data model.

Each top-level scene in a storyboard represents one view controller in your code.

Each controller is a reusable self-contained module of user interface and control logic.

Within that controller scene, you can do everything you would normally do inside of a XIB file, like place the UI, set up auto layout constraints, and connect outlets and actions to the parent view controller and its represented object.

So how do these scenes actually connect with each other?

Well, segues are really the glue, or the connective tissue, between each of the major blocks of your app’s UI.

You may have noticed that on iOS, segues usually represent a transition between whole full-screen views.

When we took a look at the relationships between view controllers on OS X, though, we noticed a somewhat different relationship: containment.

On OS X, most applications tend to group their views together in the same window.

There aren’t many transitions bringing views on- and off-screen because you actually have the screen real estate to show them all inside the same window.

Some are grouped in splits or sometimes hidden in tabs.

That’s not to say that we don’t have presentation segues on OS X as well.

We just have fewer of them: app-modal dialogs, window-modal sheets, and popovers.

Like the presentation segues on iOS, in order to make these work it’s your responsibility to override the Prepare for Segue method in your parent view controller.

From there, you can take the passed-in NSStoryboardSegue object, get the destination view controller, configure it and set its represented object, or do any other initialization you need, right before it’s presented up on screen.

So if there’s one line of code that is absolutely necessary and that you need to remember, it’s Prepare for Segue.
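As a sketch in Swift, an override of that method might look like the following; the `ShapePickerViewController` class and the "ShowShapePicker" identifier are hypothetical names standing in for your own:

```swift
import Cocoa

// Hypothetical destination controller, for illustration only.
class ShapePickerViewController: NSViewController {}

class CanvasViewController: NSViewController {
    // Called just before any segue triggered from this controller fires.
    override func prepare(for segue: NSStoryboardSegue, sender: Any?) {
        // Use the identifier to tell multiple segues apart.
        if segue.identifier == "ShowShapePicker",
           let picker = segue.destinationController as? ShapePickerViewController {
            // Configure the destination before it is presented.
            picker.representedObject = representedObject
        }
    }
}
```

In a storyboard-driven app AppKit calls this for you when the segue fires; you never invoke it directly yourself.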

So at build time, each storyboard scene is broken down into each of its constituent parts.

Views are separated from view controllers, windows are separated from window controllers, and the transition segues are embedded inside of their parent controllers.

This is done to make the process of loading your user interface as lazy as possible at runtime.

Then, as a performance optimization, the container controllers absorb all of their child controllers together in the same unit.

This is because all of those controllers are going to be needed at the initial load time of the top-level container controller, and it’s just more efficient to unarchive one file.

Then, each unit is compiled into a NIB file, and then all of the NIBs are bundled together inside of a .storyboardc bundle.

Then, that bundle is nestled inside of your application’s Resources directory, and then your app can go to the store.

So we’ve put a lot of thought and work into the Storyboard Compiler to make sure you get the most efficient loading at runtime while letting you use the most flexible UI to be able to connect and rewire the relationships between your controllers on the canvas, all with the absolute minimum of glue code.

So let’s see how this all works over here on the demo machine.

So as our way of testing storyboards and to make sure that everything that we’re building is actually working, we started by rewriting an old piece of sample code that you may be familiar with called Sketch.

In our storyboard based Sketch: The Next Generation, we’re taking a classic document based app and we’re remodeling much of the control logic into view controllers.

So as you can see here, we have the main storyboard of our application.

It already has a number of scenes that we’ve prepopulated with view controllers, and what we want to do is to show a UI that has our canvas where you can drop shapes and use gestures to manipulate them and then have a split.

And on the other side of that will be the inspector area where you can see the different properties of the currently selected shape.

So to model this, I’m going to use a split view controller, and I’m just going to drag that out into the canvas here.

And it comes with two view controllers already, which I don’t really need because I already have my canvas that I can just wire up here and drag, connect and create a split item out of that.

And then we want to wire up the inspector as well.

And you can see here we have like a little preview of what the canvas and the inspector look like.

And it’s using the auto layout constraints that have already been set in those views.

Also, to get this to show up, we want to take our window controller and connect the new split view controller and say that this is going to be the window’s content view controller.

So looks like all of our boxes have all of the lines connected to them.

Let’s run this and see what shows up.

So here is our application, and as I resize it, you’ll notice that, you know, we have a good enforced minimum size from our auto layout constraints, but the split is not sticking kind of to the side where we would want it.

As we resize the window it just keeps getting bigger.

So I know what I can do to fix this.

I just have to go to the split view controller and find the split view item that corresponds to that.

And I can set the holding priority of the split to be just a little bit more than the default.

And while we’re here, we might as well mark it as saying it can collapse because, you know, we might not want to always see the details of all of the shapes on canvas.

So let’s run this and see how it looks.

All right.

Now, as I drag this, we notice that the split view edge is actually hugging the side of the window, as we want it, so let’s try and add a shape.

Oh, and it looks like we don’t have our Shape Add button hooked up, so I’m going to create a new kind of segue.

And I’m going to show the shape picker controller as a popover.

And in order to make this work, though, there is one piece of code that we all have to remember to implement: the Prepare for Segue.

So I’m actually going to open up the Assistant Editor and I should see the canvas view controller.

Here we go.

And it looks like somebody has already helpfully written the code for me, so I’m just going to uncomment it here.

And what it does is it takes the destination view controller out of the passed-in NSStoryboardSegue.

And what we’re doing is we’re initializing it and saying that the shape container is going to be the canvas, which is ourself in this case.

The canvas is initiating the segue and it’s also going to be the destination where we want to add new shapes to it.

So let’s build and run this.

And when we click on it, great.

So now we have a shape.

I can’t seem to do anything with it.

I can’t move it, but I think I’ll leave that for Raleigh.

But as one more little bonus before I leave, I want to hook up what is going to be a Preferences window.

And I just want to show you just how easy this is ’cause this is one of my favorite features.

I just couldn’t wait to show you guys.

Right here we have a TabViewController that has both a General Pref pane and some keyboard shortcuts.

And what I can do is I can wire this up to the Preferences menu item.

And just say for the sake of demo we’re just going to show this as a modal dialog.

And if I run this, you’ll notice that we have a sketch with fully functional preferences.

And you notice the nice new crossfade between these two views, which comes with the TabViewController.

But this doesn’t look exactly like Preferences windows that you’re used to.

One of the really cool options that have been added to the TabViewController is a new style that, besides just tabs on top or tabs on bottom, we can actually say we want toolbar style tabs.

And if we actually specify the class of our view controllers — in this case this is the General Prefs pane view controller, and then this one is the keyboard shortcuts view controller — we will find within your bundle the images that are named the same as the class of the view controller.

So when we run this now, you’ll see a new Preferences window that has the icons and the titles of the child view controllers.

[ Applause ]

So let’s switch back to slides.

So how easy was that?

Just by wiring up a few segues on the canvas and overriding Prepare for Segue, we are able to plumb together the view controllers into a fully functional application.

And that Preferences window didn’t even use a single line of code.

So now I’d like to ask Raleigh to come up and show us how storyboards actually work under the hood.

Thank you, Mike.

[ Applause ]

Now that you’ve seen how incredibly easy it is to design your application using storyboards and to wire everything up, I’m going to pop open the hood and we’re going to take a look at the machinery underneath that drives this whole thing.

We’ve got a lot to cover here.

Just a little bit of API with storyboards itself.

There’s a lot of new API with view controllers.

We’re going to talk a little bit about how window controllers fit in with our proliferation of view controllers that we’re now adding.

And we’ll wrap it up with gesture recognizers and how they mesh really nicely with view controllers and designing your application.

It’s really awesome.

So let’s dig right in with storyboards.

Storyboards are a resource file in your project like NIBs are.

By default, if you check the checkbox when you create a new project, you can start off with a storyboard.

Or if you create a new storyboard in your project, you can go right into your target’s settings and set which storyboard to use as your main interface.

Or even a NIB file, if you happen to be transitioning from an old project to a new one, and the right thing will happen.

If you find yourself with the need to dig down into the Info.plist and set the keys manually, it’s NSMainStoryboardFile.

This is the key you need to set in your Info.plist to bring up the storyboard.
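In the Info.plist XML, that looks like this (assuming your storyboard is named Main.storyboard):

```xml
<key>NSMainStoryboardFile</key>
<string>Main</string>
```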

When we load the initial storyboard, it’s going to go through this same path; you can use the API storyboardWithName: to load any storyboard that you have in your resources by name.

And then you’d need to instantiate the initial controller.

What’s the initial controller you might ask?

Well, that’s the controller that has the arrow pointing to it that doesn’t come from any other scene.

This is the initial guy.

There’s always one of them with your storyboard.

That’s going to be the initial controller, and it’s generally the one you want to load up.

When we load your main storyboard file, we’ll automatically instantiate the initial controller and set it running on your behalf.

Not every controller that you have in your storyboard needs to be wired back up to that initial view controller.

You can have some that just stand alone all on their own and you can give them an identifier.

And you can access them by asking to instantiate the controller with identifier.

We’ll find it in the storyboard with that name.

We’ll instantiate, start doing the wiring up and hand it back to you, then you can take it from there.
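The whole storyboard API surface fits in a few lines; a Swift sketch, assuming a Main.storyboard in your resources and a scene whose Storyboard ID was set to "InspectorViewController" in Interface Builder:

```swift
import Cocoa

// Load a storyboard from the app bundle by name.
let storyboard = NSStoryboard(name: "Main", bundle: nil)

// The initial controller is the scene the standalone arrow points to
// in Interface Builder; AppKit does this for the main storyboard itself.
let initialController = storyboard.instantiateInitialController()

// Standalone scenes are loaded by the identifier you gave them.
let inspector = storyboard.instantiateController(withIdentifier: "InspectorViewController")
```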

So that’s really all there is with storyboards as far as API.

There’s not much there; they just work.

It’s really nice.

Moving on to view controllers: there’s a ton of new API here.

There’s so much new API in view controllers that I’ve got to break it down into sections.

So we’ll talk about loading and layout and the life cycle there.

About containment; we’ve got some really nice new stuff with containment.

Triggered segues, as you saw with the Plus button to do the popover.

And we’ll wrap it up with how to do things completely manual if you need to get down to that level.

So we have a lot of new properties in NSViewController for loading, display, and layout.

All of these methods here are new except for View Loaded, which is the top one.

And the way I want to talk about them is in the life cycle and how they’re used.

You’ll have your view controller.

It will get initted and allocated.

And it doesn’t have a view yet because view controllers like to lazily load their views, so there’s no view yet.

View loaded is going to return no if you ask for its property at this point.

Then load view is going to get called.

This has generally been the place where you’ve gone ahead and subclassed NSViewController and overrode load view, just to call super normally.

By default, AppKit will go ahead and try to load the NIB with the name that you gave it if you initialized it with initWithNibName:.

But new in Yosemite, if your NIB name is nil, AppKit will look for a NIB that has the same name as your view controller and automatically load that NIB for you.

So now you don’t have to subclass your view controller to get the right NIB loaded.

You just name your NIB file the right thing and it’ll do it for you automatically.

During load view, setView is going to get called and your view is finally set on your view controller.

View loaded will now return yes.

And then we return from load view, but before we continue the run loop, before we go back to giving control to you and your application, view did load is going to be called on your view controller.

It is at this point now instead of overriding load view, you override view did load and you do your one-time instantiation finalization stuff that you need to do there.

So this is something that you only want to do once when the view is loaded and you’ll never need to do again because your view is going to be hanging around.

Now you’re going to go ahead and put your view in a window, get the window on screen and we’re going to go through the life cycle of the view.

So before your view gets in the window as you add it into the window, we’ll call view will appear.

And there are a few other places that view will appear gets called.

Whenever the window is becoming visible, for example: as you’re ordering it front, or perhaps the user has previously hidden or minimized the window and is now unminimizing it.

And so this is another case where you’ll get a view will appear.

So you could do some really interesting things here.

For example, you might want to set your highlight color.

So the first time that your view is asked to draw, it draws with the correct highlight color for example.

The animation is now going to occur after you return from this.

The window will be animated on screen.

If you’re unhiding the view or you’re doing some kind of transition, that transition animation occurs.

When that completes and your view is drawn at least once, view did appear will be called in the view controller.

And now you can wire up things that you didn’t want to interfere with that animation.

And you only want it to be visible while the window is visible and your view is displayed.

You might want to set up, just as an example, some animation that’s running here that just runs constantly but only once it’s visible.

At this point, the user interacts with your view, interacts with your user interface.

You’re going to update your constraints on your view, you’re going to dirty them.

Your view controller gets to participate in updating the constraints with updateViewConstraints, and you get informed before layout happens with view will layout, and after layout happens with view did layout.

So you can participate in this as you normally would with auto layout, but you get to do it now at the view controller level.

And at some point you’re going to remove the view from the window.

Perhaps you’ve changed tabs and now the view is being hidden, the user is hiding the window, or you’re just outright closing the window.

In any of these cases, it will be preceded by a view will disappear.

That animation we started in view did appear, this is where we would want to stop it.

Then the animation occurs and the view or the window disappears with the appropriate animation and will call you back with a view did disappear, and now you can release any kind of final resources that you don’t want to hang on to as the window is not visible or your view is no longer part of the window.

Then the life cycle just continues as you switch tabs and the views come back and go away and windows hide and show.

We just continue this cycle.
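Put together, the lifecycle overrides discussed above look roughly like this in Swift; each override must call super, and the timer here is just a stand-in for any work tied to the view’s visibility:

```swift
import Cocoa

class CanvasViewController: NSViewController {
    // Example of a resource that should only run while the view is visible.
    var refreshTimer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // One-time setup; the view is loaded exactly once.
    }

    override func viewWillAppear() {
        super.viewWillAppear()
        // Prepare state the first draw depends on, e.g. highlight colors.
    }

    override func viewDidAppear() {
        super.viewDidAppear()
        // Start work that shouldn't interfere with the appear animation.
        refreshTimer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { _ in
            // Periodic visibility-bound work goes here.
        }
    }

    override func viewWillDisappear() {
        // Stop visibility-bound work before the hide animation runs.
        refreshTimer?.invalidate()
        refreshTimer = nil
        super.viewWillDisappear()
    }

    override func updateViewConstraints() {
        // Adjust constraints here when you have dirtied them.
        super.updateViewConstraints()
    }
}
```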

New in Yosemite, NSViewController is now in the responder chain.

If we start off with our canvas view, as we had in our demo application, an event is hit-tested to the canvas view and flows up through the split view to the window and finally your window controller.

Your view controllers, our canvas view controller and split view controller, can now participate in this.

And they’re automatically wired up in the responder chain right after their view.

So you can now participate in events and more importantly in action methods.

And action methods is where it really comes in handy.

For example, cut, copy, and paste.

In your menu bar, cut, copy, and paste are wired up to the cut, copy, and paste actions on the first responder.

And that might be the canvas view in this case.

Well, our canvas view doesn’t really do much.

Most of our logic is in our canvas view controller.

And previously you would’ve had to catch cut, copy, and paste in your view and somehow get it to your controller.

And now that canvas view controller is in the responder chain, you can put the copy method right in your canvas view controller and canvas view can be totally oblivious to this.

And it works out really nicely.
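A minimal sketch of that idea: the action lands on the controller with no view code at all. The shape model and `clipboard` property here are purely illustrative:

```swift
import Cocoa

// Because view controllers are wired into the responder chain right after
// their view (when linked on 10.10 or later), action methods can live in
// the controller while the view stays oblivious.
class CanvasViewController: NSViewController {
    // Hypothetical model: the shapes currently selected on the canvas.
    var selectedShapes: [String] = []
    var clipboard: [String] = []

    // Edit > Copy sends copy: down the responder chain; with the controller
    // in the chain, it lands here automatically.
    @objc func copy(_ sender: Any?) {
        clipboard = selectedShapes
    }
}
```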

I do want to point out, though, that we only wire up view controllers automatically in the responder chain if you’re linked on or after 10.10.

So let’s move forward and talk about containment.

We’ve got some really exciting things to do with containment.

View controllers can now have children, and a child can get back to its parent.

And the best way to talk about containment is with an example.

So let’s start off with tab view controller and let’s dive deep into tab view controller.

So NSTabViewController is going to manage an NSTabView.

And it manages a couple of other views, and I’ll get into details there, but mainly it manages the tab view.

It’s going to lazily load your tab views.

And this is one of my favorite things.

In a past life, I’ve had tab views that had a fair number of tabs and a lot of content in each one of those tabs.

And loading this NIB file from disk took forever.

Now that we lazily load the tab views, just the initial selected tab is going to get loaded when you first bring up your interface.

And it is much more responsive to the user that way; I really love this.

It’s now much easier to customize your tabs using a tab view controller, and I’ll get into specifics about that in a little bit.

And as you saw in the demo it’s incredibly easy.

Without code, you could get a nice preferences user interface style using a tab view controller.

And I’ll show you exactly how you would do that in code.

And it’s just as easy as Mike did it in the demo in IB.

On Mavericks an NSTabView looks like this: There’s an NSTabView.

It manages a collection of NSTabView items.

Those tab view items each have a view and some additional properties, such as the label.

So we know what label to put in the tab.

With tab view controller, logically things are now laid out like this.

There’s an NSTabViewController and the tab view controller owns an NSTabView and manages the collection of NSTabView items.

NSTabView item now has a couple of additional properties: the image, which is kind of fun so you can get an image in there, and of course the view controller.

And it’s the view controller that now owns the view, and it can lazily load the view as you switch to those tabs.

Since NSTabViewController manages a collection of tab view items, you can of course get the tab view item array.

You can also find out what the selected tab view item index is.

And this is KVO-compliant, so you can observe this.

Observing the selected tab view item index this way is much easier than it was previously, when you had to have a delegate and respond to all the delegate methods.

So you don’t have to do that anymore.

As with any sort of containment API, there’s the add, insert, and remove tab view items.

And if you have a child view controller that is in one of the tabs, you can just ask for the tab view item associated with it via tab view item for view controller.

NSTabViewItem also has this really nifty class method on it, Tab View Item With View Controller.

So you could take any generic view controller that you have and create a default tab view item for it using this method.

And what this does is it doesn’t just create a generic tab view item.

It tries to populate it with the appropriate information from the child view controller.

For example, it will take the title of your child view controller and set that as the label of the tab view item.

And we’ll also take it a step further.

As Mike showed in the demo, we look at the class name of the view controller and try to find an image resource with that same name, or preferably that name with a "-TabViewItem" suffix, and we’ll automatically set that image as the image of the NSTabViewItem.

So you can get the label and the image just automatically that way.

TabViewController has some additional properties, which are pretty interesting.

There’s the tab style as Mike showed you.

We have segmented control on top and on bottom.

So the tabs are now drawn with the segmented control.

We’ll touch on that a little bit more real soon.

There’s the toolbar style.

That’s all you have to do.

You set the tab style to toolbar.

The TabViewController will create the toolbar on your behalf and place it in the window as the toolbar; you’re done.
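Built in code, the preferences-style UI from the demo is about this short. A sketch, where the two child controller classes and their titles are placeholders for your own panes:

```swift
import Cocoa

// Stand-in pref panes; real ones would load their views from NIBs.
class GeneralPrefsViewController: NSViewController {
    override func loadView() { view = NSView() }
}
class ShortcutsViewController: NSViewController {
    override func loadView() { view = NSView() }
}

let tabViewController = NSTabViewController()
// The toolbar style creates and installs the window toolbar for you.
tabViewController.tabStyle = .toolbar

// NSTabViewItem(viewController:) fills in the label (and image, if a
// matching resource exists) from the child controller.
let general = GeneralPrefsViewController()
general.title = "General"
tabViewController.addTabViewItem(NSTabViewItem(viewController: general))

let shortcuts = ShortcutsViewController()
shortcuts.title = "Shortcuts"
tabViewController.addTabViewItem(NSTabViewItem(viewController: shortcuts))
```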

And then there’s unspecified, which is really interesting, and I’ll get to that a little bit more in just a second.

As I talked about, there is a segmented control that now displays the tabs.

You can get the segmented control.

You can supply your own segmented control subclass if you want and draw them in your subclass however you wish.

But if you set the tab style to unspecified, what you can do then is go and apply your own layout constraints to the segmented control and have the segmented control appear where you want it to appear and do the layout yourself.

So you have much more customized ability with tab view controller than you’ve had previously.

There are some transition options.

Automatically by default we set it to crossfade.

Mike showed you this in the demo as he changed tabs in our preferences UI.

It did this nice little crossfade effect.

You can set it to none and have the old style where it just switches.

We have various sliding directions: up, down, left and right.

You can combine directions that are on different axes, so you can combine an up with a left, but you can’t combine an up with a down.

That wouldn’t actually animate anything, so we don’t let you do that.

I do really want to point out, though, slide forward and slide backwards.

These are really important if you’re going to do any sort of horizontal sliding because they work in the appropriate direction for your user’s language.

So they work correctly in right to left languages.

They will go the right way and they will go the correct way for left to right languages as well.

And you just let AppKit worry about what the right directions are and we’ll handle that for you.

As I mentioned, NSTabViewController manages the tab view items, and it owns the NSTabView.

So it’s important that you let NSTabViewController manage it for you.

You don’t really need to get to it and change it.

There are a few properties, very few properties, that you can modify directly to get to the tab view.

If you find you really need to do this, look at the NSTabViewController header.

It specifies exactly what you can set, but in general let’s have your controller manage it.

I also want to point out here that as NSTabViewController is a subclass of view controller, it has a view.

The view is not the tab view.

Tab view controller has a view that contains a tab view and a segmented control and perhaps other views as tab view controller needs to manage it.

So let tab view controller handle it for you.

When you are using the toolbar style, NSTabViewController is the delegate of the toolbar.

And you might want to override these methods.

Create a subclass of NSTabViewController and override these delegate methods.

Perhaps you want to do something like add some additional items to the toolbar that aren’t tabs.

For example, a search field, right.

If you want a search field in your preferences UI, search isn’t a tab, so you will need to override these toolbar delegate methods and insert the search field yourself.

I do want to point out that these have the nice NS_REQUIRES_SUPER attribute on them, so if you do override these methods, you must call super, and the compiler will help you remember that.

So that’s tab view controller.

Let’s take a look at another example.

We’re introducing split view controller in Yosemite.

It manages an NSSplitView and it does lazy loading of views.

And it requires auto layout.

It does all of the managing of the splits and how they move with window resizing, all via auto layout.

It will just start adding constraints to your views.

But we have all migrated our apps forward to using auto layout, so that won’t be a problem, right?

So today on Mavericks, a split view manages two or more views.

It has dividers in there.

With a split view controller, logically it’s now laid out like this: the split view controller owns an NSSplitView and manages a collection of split view items.

Those items have some nice new properties.

And a split view item is a completely new class in Yosemite.

Of course it has a view controller, which will lazily load its view.

And you can set properties such as whether it can collapse, and the holding priority.

Mike showed you how important the holding priority is.

These were behaviors that you used to have to implement by being the delegate of the split view.

You don’t need to do that anymore.

You can just set these properties right on the split view item.

So it’s a lot easier to do it that way.

And of course you can create a default split view item if you have some child view controller.

Pro Tip: If you want to animate a collapsible split, you use the animator proxy on the split view item.

So you say splitViewItem.animator, set collapsed to yes or no, and the collapse or expansion animation will happen for you in a nice, animated fashion.

Some additional properties.

Since we manage a collection of split view items of course you can get to the split view items array.

The obligatory add, insert, and remove split view item methods are there, and if you have a child view controller you can find its associated split view item by asking for the split view item for that view controller.
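In code, the split from the demo might be set up like this; the child controller classes and the 260 holding priority are illustrative choices, not fixed API values:

```swift
import Cocoa

// Stand-in child controllers that supply their own views.
class CanvasViewController: NSViewController {
    override func loadView() { view = NSView() }
}
class InspectorViewController: NSViewController {
    override func loadView() { view = NSView() }
}

let splitViewController = NSSplitViewController()
splitViewController.addSplitViewItem(
    NSSplitViewItem(viewController: CanvasViewController()))

let inspectorItem = NSSplitViewItem(viewController: InspectorViewController())
inspectorItem.canCollapse = true
// Hold the inspector at its size slightly harder than the default,
// so the canvas absorbs window resizes.
inspectorItem.holdingPriority = NSLayoutConstraint.Priority(260)
splitViewController.addSplitViewItem(inspectorItem)

// Pro tip from the session: animate a collapse through the animator proxy.
// inspectorItem.animator().isCollapsed = true
```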

Now that we’ve taken a look at a couple of examples that we’re doing in AppKit, I want to point out and stress the pattern we’re using here.

And this is the pattern that we think you should use as well if you find the need to write your own containment style of view controllers.

There’s your container view controller that’s going to manage some collection of items, so those items can carry the information specific to that container.

The item will go ahead and own a view controller that will then be able to lazily load the view.

If you’re familiar with iOS, this is a little different from iOS, where you have to have a special type of view controller subclass that then may or may not own a special type of item.

We’ve inverted that on OS X Yosemite, and one of the reasons is that now your child view controllers can be a lot more generic.

And this is the pattern that we’re using on OS X Yosemite and the pattern we suggest you follow as well.

There are some generic container API with view controllers that are exposed at the NSViewController level.

You can get an array of child view controllers and add, insert, or remove child view controllers.

But these APIs only manage the collection itself.

It’s up to the subclass to know what to do with them.

Like tab view controller will go ahead and create another tab, but won’t show the views or load the views yet, whereas split view controller will go ahead and immediately add it as another split.

So it becomes important to do the right thing in your subclass there.

But now, with these generic APIs, you can dig down and look at the child view controllers associated with the view controller.

Or if you have a child view controller, you can go back up and find its parent, and as a child view controller you can just call removeFromParentViewController on it; that’s the easiest way to take apart the parent/child relationship there.

On NSViewController we have this method, transitionFromViewController:toViewController:, which will do a nice animated transition on your behalf.

These are the exact same options we have with tab view controller.

You can do none, a crossfade, or slide in various directions.
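A hedged sketch of driving that transition yourself; ContainerViewController and swap(from:to:options:) are hypothetical names, and the default options mirror the tab view controller set:

```swift
import AppKit

// Sketch: an animated swap between two children of a container view
// controller. Assumes both children have already been added with
// addChild(_:) and `old`'s view is installed in the container's view.
final class ContainerViewController: NSViewController {
    func swap(from old: NSViewController, to new: NSViewController,
              options: NSViewController.TransitionOptions = [.crossfade]) {
        // transition(from:to:options:) installs the new child's view and
        // performs the animation on your behalf.
        transition(from: old, to: new, options: options, completionHandler: nil)
    }
}
```
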

And that’s containment in view controllers.

There’s another type of relationship with view controllers and storyboards called triggered segues.

These triggered segues perform a presentation.

And a presentation needs to have an identifier so we can find it in the storyboard if you need to trigger it manually.

And it has some source view controller.

For example, the canvas view controller in this case is our source to do our shape picker popup.

A destination controller, which is going to be our shape picker controller, and then a style.

And as we did in the demo, the segue style was a popover.

Depending on the style, there might be some other attributes you need to set up.

For example, with a popover you need to set up the anchor view.

Since we had dragged it directly from the Plus button, the anchor view was wired up automatically, but you could actually go in IB and set the anchor view for the popover.

As Mike pointed out, there’s really one piece of code that you need to write in the source view controller, in this case our canvas view controller: you need to override prepareForSegue.

From the segue you can get the identifier, so if you have to deal with more than one segue you can switch on that, get the destination controller and wire up anything that needs to be wired up so that you can have the proper lines of communication or you can get to the right model data objects and show the UI with the correct information.

Often, this is the represented object, and you can just set the destination controller’s representedObject to the right thing.
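Here’s a sketch of that override; CanvasViewController, the "ShowShapePicker" identifier, and the shapes array are hypothetical stand-ins for whatever your app uses:

```swift
import AppKit

// Sketch: the one override you really need in the source view controller.
final class CanvasViewController: NSViewController {
    var shapes: [String] = ["circle", "square"]

    override func prepare(for segue: NSStoryboardSegue, sender: Any?) {
        // Switch on the identifier if you handle more than one segue.
        switch segue.identifier {
        case "ShowShapePicker":
            // Often this is all you need: hand the destination its model data
            // via representedObject so it shows the correct information.
            let picker = segue.destinationController as? NSViewController
            picker?.representedObject = shapes
        default:
            break
        }
    }
}
```
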

There are a couple of additional segue APIs that you might find useful: shouldPerformSegueWithIdentifier:sender:.

This will give you a chance to step in and dynamically prevent a segue from performing if you need to determine that pretty late.

And if you return NO here, we will stop the segue from really getting off the ground.

Then you can call performSegueWithIdentifier:sender:, and we will find the segue in the storyboard for you and call perform, which will flow back through shouldPerformSegueWithIdentifier:sender: and of course call prepareForSegue, where you can wire up the represented object.
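A sketch of the dynamic-suppression hook; the identifier and the canAddShapes flag are made up for this example:

```swift
import AppKit

// Sketch: dynamically suppressing a segue at the last moment.
final class GuardedCanvasViewController: NSViewController {
    var canAddShapes = true

    override func shouldPerformSegue(withIdentifier identifier: NSStoryboardSegue.Identifier,
                                     sender: Any?) -> Bool {
        // Returning false here stops the segue before it gets off the ground.
        if identifier == "ShowShapePicker" { return canAddShapes }
        return true
    }
}

// Triggering it manually would look like this; AppKit finds the segue in the
// storyboard, consults shouldPerformSegue(withIdentifier:sender:), then calls
// prepare(for:sender:) and performs it:
// canvasViewController.performSegue(withIdentifier: "ShowShapePicker", sender: nil)
```
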

If you get nothing else out of this whole talk today, it’s this: with your segues, you need to implement prepareForSegue in the source controller, wire up your represented object or whatever pieces of information you need to bridge between the two, and then let the segue happen.

Using Interface Builder to wire up your segues and do it all visually is the easiest way to go.

It’s our recommended approach.

We can take apart the engine a little bit more and we could look at the manual API that’s going on under the hood that you can also call directly.

So we’re going ahead and we’re exposing this API.

You can ask source view controller to present some other controller as a sheet, as a modal window, or as a popover.

And as the destination controller, the controller that is being presented as the sheet or the modal window, you dismiss yourself by calling dismissController, and AppKit will do whatever the appropriate thing to do is and dismiss the controller for you.

This is incredibly powerful because it allows your destination controllers to be a lot more modular and reusable.

For example, when Mike and I were first doing this project in IB, it was like, “Oh,” you know, “sheet’s the first thing in there.

We need to have our picker come down as a sheet.”

And we wired up present view controller as sheet, and that was easy.

Then we said, “No, no, it’ll look much nicer as a popover.

That’d be cool, yeah.”

So the only thing we had to change was the segue type to present it as a popover.

Our destination controller just calls dismissController.

It didn’t need to know how to dismiss a popover or that dismissing a sheet in raw code is actually dramatically different.

Just go ahead and let NSViewController handle that on your behalf and it’ll do the appropriate thing.
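Sketched in code, the manual presentation API looks something like this (modern Swift naming; the picker and anchor button are hypothetical):

```swift
import AppKit

// Sketch: the manual presentation API that segues drive under the hood.
// Only the presenter chooses the style; the picker stays style-agnostic.
func showPicker(from presenter: NSViewController,
                picker: NSViewController,
                anchorButton: NSView) {
    // Pick whichever style fits:
    presenter.presentAsSheet(picker)
    // or:
    // presenter.presentAsModalWindow(picker)
    // or:
    // presenter.present(picker, asPopoverRelativeTo: anchorButton.bounds,
    //                   of: anchorButton, preferredEdge: .maxY,
    //                   behavior: .transient)
}

// Inside the picker, dismissal is style-agnostic; AppKit does the right
// thing whether it was shown as a sheet, a modal window, or a popover:
// self.dismiss(self)
```
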

We can actually go yet one more level deeper: at the lowest level, there’s presentViewController:animator:.

And there is a view controller presentation animator protocol, and we’re publishing this as well in Yosemite.

So you can create your own presentation animator object and implement the protocol, which is just two methods: animatePresentationOfViewController:fromViewController: and animateDismissalOfViewController:fromViewController:.

So when you do a presentation, we will take the instance of your animator object, call animatePresentationOfViewController:fromViewController:, and then you can handle the animation however you want.

Once the user is finished and dismiss controller is called, it will bubble up to the right place.

We will take that same instance that we had before.

We’ve stashed it.

We will call animateDismissalOfViewController:fromViewController:.

You’re now responsible for doing any dismissal animation however you see fit, and once we return from that we will go ahead and release the animator object.
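Here’s a sketch of what such an animator might look like; the fade-in is just an illustrative choice:

```swift
import AppKit

// Sketch: a custom presentation animator that fades the presented view in
// over the presenter, and removes it on dismissal.
final class FadeAnimator: NSObject, NSViewControllerPresentationAnimator {
    func animatePresentation(of viewController: NSViewController,
                             from fromViewController: NSViewController) {
        // Install the presented view over the presenter and fade it in.
        let view = viewController.view
        view.alphaValue = 0
        view.frame = fromViewController.view.bounds
        fromViewController.view.addSubview(view)
        view.animator().alphaValue = 1
    }

    func animateDismissal(of viewController: NSViewController,
                          from fromViewController: NSViewController) {
        // AppKit stashes the same animator instance and calls this on dismiss.
        viewController.view.removeFromSuperview()
    }
}

// Usage: fromViewController.present(pickerViewController, animator: FadeAnimator())
```
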

So that’s view controllers.

There’s a lot of new API in view controllers.

It’s really powerful.

We can’t wait to see what you are going to do with it.

It’s going to be a lot of fun.

So let’s move forward and talk about window controllers a little bit and how they fit in with this large collection of view controllers that we’re going to be adding to our applications.

So let’s just look at some of the new API we have in window controller.

There’s just a little bit.

Namely, there is a content view controller, and this is a requirement.

And when you’re wiring up your storyboard, you need to set up a content view controller to your window controller.

NSWindow also has a content view controller property that’s new on it.

And what happens is when the window controller lazily loads the window, it sets the window’s contentViewController to be the content view controller that the window controller has associated with it.
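A sketch of that plumbing, built in code instead of a storyboard (the view size here is arbitrary):

```swift
import AppKit

// Sketch: wiring a content view controller to a window and window
// controller by hand; a storyboard does this for you when you wire it in IB.
_ = NSApplication.shared  // ensure app machinery exists outside an app bundle

let contentVC = NSViewController()
contentVC.view = NSView(frame: NSRect(x: 0, y: 0, width: 400, height: 300))

// NSWindow's contentViewController property drives the window's content view.
let window = NSWindow(contentViewController: contentVC)
let windowController = NSWindowController(window: window)

assert(window.contentViewController === contentVC)
assert(window.contentView === contentVC.view)
assert(windowController.contentViewController === contentVC)
```
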

Of course, like view controller, you can also get the storyboard that loaded the window controller, if one loaded the window controller, or it will return nil if your window controller wasn’t loaded from a storyboard.

And of course window controllers, like view controllers, can be presented and you can call dismiss controller on the window controller and the right dismissal will happen.

NSWindowController also implements the segue protocol so that it can prepare for segue and you can override that.

You can ask it to perform a segue with an identifier, and you can of course override Should Perform Segue to do dynamic suppression in rare instances.

But what is a window controller really responsible for now?

Because we have these view controllers, and all your logic is living way down in the view controllers.

So your window controller should just manage the window, you know, things like should I really close when the Close button was pressed?

Manage the titlebar and the toolbar, make sure that has the appropriate look there.

Manage the content view controller.

If we’re going to have content in the window, the window controller’s going to go ahead and manage that.

Window controllers over time have become a place where you put a lot of view logic.

It was a convenient place to put view logic for various parts of your user interface.

But now that we have view controllers and a specified parent/child relationship and a hierarchy here with segues, you can move all of that unrelated logic down to their appropriate view controllers.

And now you have view controllers which are reusable, more cohesive, independent modules.

And you don’t have unrelated code in your window controller that you don’t need anymore.

So your window controller is a lot smaller and is a lot more readable and maintainable.

So that’s window controllers.

So let’s talk a little bit about gesture recognizers.

I’m really excited about gesture recognizers.

I’m an event guy at heart, so I like to play around with events.

So we’re introducing gesture recognizers to OS X Yosemite.

The API is nearly identical to what it is on iOS.

They’re a lot of fun.

But when you’re looking at it on the desktop, most gestures aren’t determined the same way that they are on iOS.

On iOS, you have to look at the raw touches and figure out what that means.

If you do a pinch gesture on the trackpad on OS X, the trackpad itself recognizes that as a magnification and it sends magnification events.

So you don’t have to figure that out.

But not everything that we can recognize as a gesture is prerecognized for us at the lower levels.

For example, what is a click?

You press on the trackpad, which is going to emulate a mouse event, or you press a mouse button, you get a mouse down event followed by a mouse up event without any drags in between.

So these are separate events, and we can look at them all together and combine them for you and go, “Oh, that’s a click.”

So we can recognize a click.

We can recognize that independently and differently from a click, a drag, and a release; that’s something else.

NSGestureRecognizer is really good at helping you disambiguate user input.

How good? Well, this is code that you might’ve had to write if you had to figure it out yourself.

This is seven tracking loops deep of [inaudible].

And what this code is trying to do is figure out is this a click, a double click, a triple click, or a drag?

And it’s trying to sort all of this out and all the logic that’s associated with it.

Gesture recognizers clean this all up and make it much simpler to disambiguate the user input, give you hooks to tap into when you need to add some custom logic, and the result is much easier to maintain.

Gesture recognizers are a really good fit for view controllers.

You really don’t want to have to override mouse down in your view controller.

That’s not something your view controller should really be doing.

And when you go to read your code in your view controller, you’re going to see mouse down and you’re going to have to try and figure out with comments or by really looking at the code what is this mouse down trying to do.

Gesture recognizers have a target/action pair.

It will recognize the full click or the full drag for you, and now you can wire that up to an action in your view controller.

So instead of having mouse down you have Select or you have Drag.

And it’s now obvious in your code.

It’s much more readable on what to do and it’s partitioned out better.

Much more maintainable code.

Here’s an example of using gesture recognizers in place with code.

This is alloc initting a magnification gesture recognizer.

You init it with the target and the action.

I want to point out a slight difference here from iOS.

On OS X, all of our controls and our menus have a single target/action pair.

That’s the paradigm we use.

That’s the paradigm we’re continuing to use with NSGestureRecognizer.

That’s a little bit different than iOS.

Once you have your gesture recognizer alloc-initted, you go ahead and set it on some view.

Gesture recognizers need to be attached to a view and recognized for that view.

Optionally, you may need to set the delegate of your gesture recognizer to your view controller.

Perhaps you need to set up some more logic to help disambiguate some things.

Then you implement your action method, Magnify in this case.

The sender of the action method is going to be the gesture recognizer.

And you’ll probably want to switch on the state.

So for magnify, when it begins, you set up some initial state.

You do your magnification during the changes, and when it finally ends, you go ahead and you commit your changes to your data model.

That way, with this commit, when you do undo, you’ll get one undo for the whole magnification instead of just part of it.

For something like a click, there is another state, which is just recognized.

So if a recognizer doesn’t go through began, changed, and ended, because it’s a single discrete gesture like a click, you can just check for the recognized state.
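Putting that together, here’s a sketch of a magnification action with the state switch; ZoomingViewController and its scale bookkeeping are hypothetical:

```swift
import AppKit

// Sketch: wiring up a magnification gesture recognizer in code and
// switching on its state in the action method.
final class ZoomingViewController: NSViewController {
    var committedScale: CGFloat = 1
    private var scaleAtGestureStart: CGFloat = 1

    override func viewDidLoad() {
        super.viewDidLoad()
        // A single target/action pair, as with controls and menus on OS X.
        let magnify = NSMagnificationGestureRecognizer(target: self,
                                                       action: #selector(magnify(_:)))
        view.addGestureRecognizer(magnify)
        // Optionally set magnify.delegate to help disambiguate gestures.
    }

    @objc func magnify(_ sender: NSMagnificationGestureRecognizer) {
        switch sender.state {
        case .began:
            scaleAtGestureStart = committedScale  // set up initial state
        case .changed:
            let scale = scaleAtGestureStart + sender.magnification
            view.layer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        case .ended:
            // Commit once, so undo restores the whole gesture in one step.
            committedScale = scaleAtGestureStart + sender.magnification
        default:
            break
        }
    }
}
```
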

The gesture recognizers that we’re exposing in Yosemite are, first, the click gesture recognizer.

This is a mouse down and up.

You can set which button to recognize or even chord of buttons to recognize.

There’s the pan gesture recognizer, which is a mouse button down, drag and release.

And this is very similar to the pan gesture recognizer on iOS, but on iOS, it’s a touch, move and release.

And the equivalent of that on the desktop is obviously the mouse: mouse down, move, release.

And just like click gesture recognizer, you can set which button to recognize or chord of buttons.

There’s a press gesture recognizer, which is holding the mouse button down with a time component.

A “long press” is another way we’ve said it, but NSPressGestureRecognizer is a click with a time component.

And then we have magnification gesture recognizer and rotation gesture recognizer.

I mentioned earlier that magnification and rotation are already recognized for us at the hardware level.

And that’s true, but they can still be bundled up inside of gesture recognizer so that you can get this nice target action association with it and it continues to fit in the model with these other gesture recognizers in your application so you have a consistent use of gestures inside your app.

And of course you can set the magnification and rotation gesture recognizers to work simultaneously, or you can have them recognize magnifications or rotations only one at a time.

So you can wire those up with your various properties.
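As a quick sketch, here’s the lineup and the kinds of properties you can wire up (the targets and actions are omitted, and the specific settings are illustrative):

```swift
import AppKit

// Sketch: the gesture recognizers exposed in Yosemite and a few of their
// configurable properties.
let click = NSClickGestureRecognizer(target: nil, action: nil)
click.buttonMask = 0x1             // which button to recognize; a mask lets you chord
click.numberOfClicksRequired = 2   // e.g. require a double click

let pan = NSPanGestureRecognizer(target: nil, action: nil)
pan.buttonMask = 0x1               // mouse down, drag, release

let press = NSPressGestureRecognizer(target: nil, action: nil)
press.minimumPressDuration = 0.5   // the time component of a "long press"

let magnify = NSMagnificationGestureRecognizer(target: nil, action: nil)
let rotate = NSRotationGestureRecognizer(target: nil, action: nil)
```
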

I do want to make one quick aside on magnification and rotation gestures.

For the raw magnification and rotation events, we’ve added the phase property, so those events also have begin, changed, and ended phases associated with them.

So if you’re looking at the raw events, you can determine there are phases right there.

Event flow is really important with gesture recognizers.

Let’s look at mouse press for an example.

In Mavericks it would work like this: the mouse press goes through NSApplication’s sendEvent:, and if it doesn’t get consumed there, sendEvent: will send it off to the event observers, if there are any.

If the event observers don’t consume it, it will flow back up to sendEvent:, and sendEvent: will send it off to the appropriate window.

That window’s sendEvent: method is called.

It looks at the event and figures out via hit testing which view to send it to.

That view gets a mouse down message.

If the view doesn’t consume the mouse down, it bubbles up through the responder chain as normal, and “normal” now in Yosemite includes your view controllers.

And gesture recognizers are here.

Once the event is sent to the window, in NSWindow’s sendEvent:, it will do the gesture recognizing.

You might notice we have some dashed arrows now instead of solid arrows, and the reason for that is gesture recognizers might delay events.

Gesture recognizers will take a pan, for example, which is a drag.

You’ll do a mouse down and it goes, well, I’m not sure if they’re dragging yet or not and we don’t want any of your other code to start trying to go into a tracking loop or do anything with the down yet.

We want to wait till we know for sure if it’s a drag or not.

And so your gesture recognizers can delay events.

And if they end up not consuming them, then they will continue to flow through the system as normal.

What’s really important for you to understand as developers here, send event is a place that is often overridden.

If you do override send event in NS Application or NS Window, if you consume events that gesture recognizers have already started to process in that sequence, you could really confuse your gesture recognizer and then your app will not work right at all.

So you need to be careful about that and think about that when you’re using gesture recognizers.

This also goes for event observers.

If you add an event observer, it can consume the events before the gesture recognizer gets to them.

And the gesture recognizer really needs to have all of them that belong in a sequence together.

If you want to subclass gesture recognizer you can.

We fully support that.

Check out the NSGestureRecognizer.h file.

There’s an NSSubclassUse category.

That’s where we’ve put the additional methods that subclassers need to worry about.

And you can write a subclass and recognize a drag wiggle as whatever you want to recognize it to be.

I’m not going to go into details here, so if you want more information, please come see me at one of the labs, and I will be more than happy to talk to you about gesture recognizers.

So let’s put this all together a little bit, and let’s go back to our demo and let’s get that shape to move.

Mike couldn’t get the shape to move, but I think I can.

OK, so we’ll run this again real quick, make sure we’re all right back in the same spot that we were.

We’ve added a shape and sure enough we can’t get it to move.

Well, I know that in the canvas view controller I wrote a select action method here and a drag with gesture recognizer action method here.

So I know that this stuff exists.

They should be getting called, so we’ll need to figure out why they’re not getting called.

And it’s, you know, just like I showed before, you just switch on the state and you do what you need to do there.

So let’s go back into our Storyboard and look at our canvas view controller, and there’s the canvas view controller.

My gesture recognizers aren’t there, so let’s go ahead and add some.

We’ll quickly add a pan gesture recognizer, and we’ll wire the pan gesture recognizer up to the drag.

And we’ll go ahead and add a magnify gesture recognizer.

There you go.

There we are.

We’ll add that and wire that up to magnify.

And now it should be quite obvious what code is going to get run.

We run our app, add a shape, and now we can drag it.

And we can do a pinch and resize the shape and we can add another one.

And they work independently of each other.

And there we go.

[ Applause ]

With view controllers backing storyboards, it’s super easy to design your applications now.

Window controllers can just control your window, but they also have some nice new API to work with storyboards.

Wire up your gestures with gesture recognizers.

It’s easier than ever, it’s a lot of fun.

Everything here is really great.

Can’t wait to see what you do with it.

For more information you could see Jake.

He’s over here up in the front with a nice shirt.

Last night somebody commented on his shoes, so I figured I’d comment on his shirt today.

Check out the “What’s New in OS X” documentation.

We’re working like crazy on the documentation for View Controller, so that is going to be coming.

It’s going to be really nice.

Some related sessions, check out the “What’s New in Interface Builder.”

Storyboards, along with other things, are going to be covered there.

I really want to point out “Creating Modern Cocoa Apps” on Thursday.

If you’re new to writing apps on OS X, Modern Cocoa Apps will cover view controllers a little bit, but it’ll also cover a lot of other technologies that you should be adopting in your OS X application.

So thank you.

Enjoy the rest of the conference.
