Platforms State of the Union 

Session 102 WWDC 2016

WWDC 2016 Platforms State of the Union

[ Music ]

[ Applause ]

Good afternoon.

Welcome to WWDC 2016.

Quite a bit has happened since we met about a year ago.

With the addition of tvOS, we now have four Apple OS platforms, each of them with their own App Store, and all our platforms are optimized for their own unique experiences.

But they share a tremendous amount of common technologies and APIs, making it easy for you to bring your apps to all four platforms while leveraging their individual character at the same time.

Our Xcode tool chain is the same for all four platforms.

They share most of their frameworks and libraries, and the underlying programming concepts and programming languages are the same as well.

And today we are announcing a ton of new APIs and technologies that you’ll be able to take advantage of.

You’ll find many more ways to express your ideas, and you’ll be able to target even more users in even more markets.

iOS 10 in particular is huge this year for developers.

In fact, if you take a look at the iOS 10 release that we announced this morning, you’ll find that we are opening up pretty much the entire user experience of iOS to developers, covering everything from notifications to Phone, Messages, Maps, and even Siri.

An important concept that we use to achieve that is extensions, and you might remember that we introduced extensions two years ago.

They represent an increasingly important mechanism for us because they allow you to branch out from your apps and to participate in our system functionality.

They allow you to securely customize our OS’s by running short-lived sandboxed services that are launched on demand.

Our shipping products already support a broad variety of extension points, and we’re adding many more this year, allowing you to hook even deeper into our OS’s and apps.

Perhaps the most exciting new extension point is for creating iMessage Apps.

And to tell you all about it, I’m going to hand over to Darin Adler.

[ Applause ]

So I’m having a great time with the new Messages, and I hope you all will too.

iMessage Apps are how you can be a part of that.

Now, as Andreas mentioned, iMessage Apps are extensions, and that means that like other extensions, you can include them in your apps on the App Store.

But with iMessage Apps, there’s another option as well.

You can include them in the iMessage App Store, which you get to right from inside Messages.

Looks like this.

Now, when you send interactive messages with your iMessage App, you’ll have the icon of the app up in the corner, but even more importantly, if the person you send that interactive message to doesn’t have the app yet, they’ll have this link that says “Get” and the name of the app.

If you tap on that link, that takes you right to getting or buying the app, so it’s an amazing way to have your customers spread your app from one person to another.

Now, if you’re building a Sticker Pack App, it’s really simple.

There’s no coding required.

You just take all of the graphics for the icons for the app, the graphics for the stickers, put them into Xcode, build, and then submit to iTunes Connect.

If you want to make a more sophisticated app and take advantage of all the power in iMessage Apps, you program with Swift and use UIKit just like with other extensions.

And there’s a new Messages extension point.

The classes in this Messages extension point give you access to everything you need inside the Messages App.

So there’s an object representing a message that you send.

There’s an object representing the whole conversation the message is part of.

There’s even an object representing a thing called a session, which lets you group messages together, and it’s a really great way to do collaborative iMessage Apps.

Now, all of this is done without compromising the privacy that Messages is famous for.

Your app doesn’t have any access to anything in the conversation outside of what’s being done with the app, and it doesn’t even necessarily know who’s involved or who you’re sending to, so that helps keep the privacy intact.

Now, adaptive design is important to any iOS app development, and it’s even more important for iMessage Apps.

That’s because Messages runs on all the different sizes and shapes of devices and in all the adaptations.

And so it runs on iPad, it runs on iPhone, it runs in Slide Over on iPad, portrait and landscape, and so your iMessage Apps need to, too.

But there’s one additional wrinkle for iMessage Apps, which is that they run in this Compact Mode down at the bottom of the screen where the keyboard is and you can slide between them or you can take that same app and expand it to its full size.

Sometimes the app will do it.

Sometimes the user will do it.

Adaptive design is very important to that experience as well.

Everything you need to develop iMessage Apps is available, and so you can really start now.

The SDK has all the things I talked about, including the Messages extension point.

And the simulator even has a new, special version of Messages just for developers that lets you see both sides of a conversation and try out your iMessage Apps to see how they’re both sent and received.

And now Adele Peterson will show you how all of this comes together in iOS 10.

[ Applause ]

Thanks, Darin.

As you saw this morning, our friends at Disney have made some awesome stickers, so I’m going to show you how they did it by putting together a sticker app in Xcode.

Now, this only takes a minute, so even though I’m starting a new project, I’m actually almost done.

I’m going to start by selecting the Sticker Pack application template.

I’ll give it a good name like Star Wars, save it, and I’ll select the Stickers Asset Catalog.

Now, I want my stickers to have a great icon when I look at them in the Messages app drawer, so I’ll start by dragging in my icons.

And now I can select my sticker pack and drag my stickers in.

Okay, let’s give that a try.

Now that Messages is in the simulator, it will be easy for you to try out your stickers in iMessage Apps.

Okay, so let’s launch the sticker pack.

I just love this 8-bit droid.

So I can test out sending one and I can even test out peeling and dragging a sticker onto another message.

Looks great.

And that’s how easy it is to make a sticker app.

[ Applause ]

Now onto iMessage Apps.

So I’ve got this ice cream app here, and my daughter’s really into these kinds of apps that let her design and create things, so I’m making an iMessage App for her to build ice cream stickers with her friends.

So here in the Compact View, you have the completed ice cream stickers, and when I tap the plus, the app expands and shows the ice cream building UI.

So I’m going to select a cone here and start this off.

So I’m going to select it and send.

So now the simulator shows both the sender and receiver sides of the conversation, so you don’t even need to use two iOS devices to test your iMessage App.

In this view, I’m actually John Appleseed and I’m sending the cone to Kate Bell.

Let’s look at it from the other side of the conversation.

So here Kate’s received the cone.

You can tap on the message, launch the app, add some scoops, and send it back to John.

And then on the other side of the conversation, you receive this.

You tap the message, you add a topping, and send it back to Kate.

Now, as delicious as this all looks, I don’t really want my entire conversation to be filled with these partially built ice cream cones.

There’s actually a much cleaner way to build this kind of collaborative iMessage App.

If I use the same MSSession for each of those messages, then the earlier steps of building ice cream will be replaced by concise descriptions, and you’ll only see the latest ice cream message.

So I’m going to make that change in Xcode and show you how it looks.

So let me switch over to the ice cream project.

Let’s see.

Okay, so here’s the part of my code that creates the MSMessage object.

I’m going to drag in this code.

This code looks to see if there’s a selected message in the conversation.

That’s what you get when you tap that message and it launches the app full screen.

So if there is a selected message, we’ll use the session associated with that message, and if there isn’t, we’ll pass in a fresh MSSession.
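The session-reuse logic described here can be sketched in plain Swift. Session, Message, and sessionForNewMessage below are hypothetical stand-ins for the Messages framework’s MSSession and MSMessage, so the sketch stays self-contained:

```swift
// Hypothetical stand-ins for the Messages framework types; a real
// iMessage App would use MSSession and MSMessage from Messages.
final class Session {}

struct Message {
    var session: Session
}

// If the user launched the app by tapping an existing message, reuse
// that message's session; otherwise start a fresh session so a new
// ice-cream thread begins.
func sessionForNewMessage(selectedMessage: Message?) -> Session {
    return selectedMessage?.session ?? Session()
}

let firstSend = sessionForNewMessage(selectedMessage: nil)   // fresh session
let reply = sessionForNewMessage(
    selectedMessage: Message(session: firstSend))            // reuses it
print(reply === firstSend)                                   // true
```

Because the reply shares the first message’s session, Messages collapses the earlier steps into concise descriptions instead of stacking partially built ice creams.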

So let’s give that a try.

Okay, so now we’ll launch this app again.

We’ll tap the Plus button, select a cone, send it to Kate, add some toppings, send it back to John, and then finish it off, and send it back to Kate.

So now you can see those descriptions describing the earlier steps of the process, and you no longer have those partially built ice creams sitting on top of the finished product.

And that is a little taste of what you can do with iMessage Apps.

Next up, Robby Walker to tell you about Siri.

[ Applause ]

Thank you, Adele.

Five years ago, we announced Siri, the intelligent assistant for iPhone.

Since then, Siri has spoken to people hundreds of billions of times.

As of today, Siri is now available on five classes of devices and in a whopping 36 locales.

Siri also has many new features and an improved core experience.

One cool example, on iPhone 6s and 6s Plus, you can start talking to Siri the instant you press the Home button with literally zero milliseconds of latency thanks to an amazing collaboration between our hardware and software teams.

But of course, until today, something really important has been missing, and that’s apps, so we’re so excited to launch the first version of SiriKit on iOS.

[ Applause ]

We believe the best experience for people is to use the apps they love, the apps you all have created, and with SiriKit, people will now be able to interact with those applications in a new, conversational way.

I’m going to talk about how your application will work with SiriKit to provide a great, conversational experience.

The first thing Siri does is understand what the user said, taking audio and turning it into text.

Then, Siri understands what the user means, taking that text and turning it into what we call an intent.

Based on that intent, Siri then takes action and provides responses, both visual and verbal.

Your application will provide three things.

The first is vocabulary to aid in Siri’s understanding.

The second is your app logic, your core functionality, and of course, a great user interface.

Now, we designed SiriKit so that Siri handles the conversation and your application handles the functionality.

And what’s great about that is it means it’s incredibly easy to adopt SiriKit and your users can expect a consistent, high-quality experience that’s natural, feels like a conversation and not like a command line.

Let’s dig in a bit.

The first role of your application is to provide vocabulary, and there’s two kinds.

The first is app vocabulary.

These are terms that would be known to any user of your application.

Things like UberX or Pinboard.

The other is user vocabulary.

These are terms that are specific and important to an individual user of your application.

Things like their contact names or names of photo albums.

The main role of your application is to provide your app logic, and this comes in an extension at three key moments.

The first is to help Siri understand the parameters of the user’s intent.

The second is to help Siri show the user what will happen if and when they confirm their intent.

And the third is, of course, to actually handle the intent, to accomplish what the user came to accomplish.
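These three moments can be sketched end to end in plain Swift. The types below (SendMessageIntent, Resolution, MessageHandler) are hypothetical stand-ins for the Intents framework protocols such as INSendMessageIntentHandling, but the resolve, confirm, handle shape is the same:

```swift
// Hypothetical intent object; Siri hands your extension a structured
// object like this instead of raw speech.
struct SendMessageIntent {
    var recipient: String
    var content: String
}

enum Resolution: Equatable {
    case success(String)   // resolved to a known contact
    case needsValue        // ask the user for clarification
}

struct MessageHandler {
    let knownContacts: [String: String]   // spoken name -> canonical name

    // 1. Resolve: help Siri pin down the intent's parameters.
    func resolveRecipient(for intent: SendMessageIntent) -> Resolution {
        guard let canonical = knownContacts[intent.recipient] else {
            return .needsValue
        }
        return .success(canonical)
    }

    // 2. Confirm: show the user what will happen if they go ahead.
    func confirm(_ intent: SendMessageIntent) -> String {
        return "Ready to send \"\(intent.content)\" to \(intent.recipient)"
    }

    // 3. Handle: actually do the work through your app's own logic.
    func handle(_ intent: SendMessageIntent) -> Bool {
        return true
    }
}

let handler = MessageHandler(knownContacts: ["Obi-Wan": "Old Ben Kenobi"])
let intent = SendMessageIntent(recipient: "Obi-Wan",
                               content: "You're my only hope")
print(handler.resolveRecipient(for: intent))   // resolved to Old Ben Kenobi
```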

And during the conversation, Siri presents visuals to the user, and you can optionally provide a second extension to customize these so that using Siri with your application still feels like using your application anywhere else.

So let’s take a look at a messaging app integration called Hologram.

Hologram is the number one app for sending messages in a galaxy far, far away.

So imagine someone says to Siri, “Send a hologram to Obi-Wan saying, you’re my only hope.”

Now, it’s Siri’s job to take that audio and turn it into text, but Siri needs help from your application.

Siri doesn’t know on its own that Obi-Wan is an important person in your user’s life, and so by providing that piece of vocabulary to Siri, you make sure that Siri understands what your user said.

Next, Siri works to understand what the user is trying to do.

In this case, send a very important message.

And Siri will also work to understand parameters, like the recipient and the content.

Siri packages all of this information into a structured object, a nice, simple object.

Your application doesn’t have to worry about the countless ways someone could express this same idea to Siri, whether it be different phrasings or multi-step interactions.

All of that’s handled and all you have to worry about is this really simple object.

SiriKit will then hand this object to your extension to help with parameter resolution.

For example, maybe your application knows that the user said “Obi-Wan,” but they usually refer to this person as Old Ben Kenobi, and you can instruct SiriKit to update the intent.

Then comes the big event.

It’s time to actually handle the user’s intent, and SiriKit will again hand this object to your extension for processing.

In this case, you’re going to send the message, you’re going to get in an escape pod, roll over some sand dunes, deal with some Jawas.

You’re going to get the job done for your users.

And along the way, Siri will provide a default user interface for this interaction.

And if you want, you can also bring your application’s experience into Siri so that it feels more familiar to your users.

And that’s it.

Those are the three things that your application has to do because SiriKit handles the conversation.

And handling the conversation actually means a lot.

Siri’s behavior is different depending on how someone starts a conversation with Siri.

So if you’re holding your phone, pressing the Home button, looking at your screen, Siri will provide more visual responses and say less things out loud.

But if you say, “Hey, Siri,” or if you’re in the car using CarPlay, then Siri’s going to say a lot more and show a lot less.

SiriKit is powered by extensions and NSUserActivity, the same technologies that power an increasing number of OS integrations, and you’ll hear a lot about them this week.

This year, SiriKit will connect to applications in six domains, and in each of these domains, there may be more than one intent to provide a complete, conversational experience.

For example, in Messaging, you can send messages or search them, and in Payments, you can request payment or send payment.

And SiriKit will be available in all Siri languages, and this is a big deal because when I said earlier that Siri handles the conversation, what I actually mean is Siri handles the conversation in all 36 locales so that your application doesn’t have to worry about that.

[ Applause ]

We’re so excited to see what you’ll build and so is someone else we know.

Siri, say hello to apps.

I’m swiftly becoming friends with them.

And on that note, here is Chris Lattner to talk about Swift.

[ Applause ]

All right.

Thanks, Robby.

Let’s dive into what’s new with Swift.

Now, it’s easy to forget that we launched Swift and released it less than two years ago, and in that short time, you’ve built and submitted to the App Store over 100,000 apps, including well-known titles like these.

Now, Swift has also been very popular with the enterprise as well.

Just as a simple example, IBM’s already built and deployed hundreds of apps written in Swift.

Now [ Applause ]

Now, there’s lots of reasons that people love Swift, one of which is that we open sourced it less than six months ago.

[ Applause ]

And in that time, it’s become the number one most downloaded language project on GitHub, the number one most watched, and the number one most favorited, and it’s stayed on top.

I think to me, even better than this is that we’ve had a ton of new people get involved with the project.

We now have hundreds of new contributors contributing through open source, and we’ve processed thousands of pull requests.

The response has just been phenomenal.

Now, one of the reasons that Swift being open is really important to us is that we want to see it go everywhere.

For example, we think that Swift is awesome for the server, and so we ported it to Linux.

And the community agrees.

In a very short period of time, they’ve started bringing it to other popular platforms like FreeBSD, Android, and even Windows.

[ Applause ]

Now, to help Swift get to all of these platforms, we started the Swift Package Manager.

The Package Manager’s a great way to build, share, and reuse cross-platform packages.

It can generate an Xcode project file and even compile projects natively on platforms like Linux.

Another great thing about Swift Open Source is you can get involved with the design of the language itself.

Swift-evolution has been an amazing experience, with intense interest and a completely insane number of emails on the mailing lists.

It’s ridiculous.

Now, we published an open roadmap with the goals for each release, and we solicit ideas and discuss different directions we could go to push the language forward together.

We then debate these together openly as a community and we turn them into formal proposals.

So far, we’ve had over 100 proposals to move Swift forward.

This is only in six months, and this is just an amazing sign of how fast Swift moves.

But it also shows another important and interesting point.

Despite its wide use, Swift is still a relatively new programming language.

Now, as you’ve seen I think, we’ve chosen to quickly identify issues in the Swift language and fix them because we don’t want to be stuck with issues forever.

We are building Swift as the next great programming language, and so we want it to be great for the decades to come.

And now the problem with this is that for some developers, the programming language changing out from underneath you can be concerning, and as Swift goes to new platforms and new kinds of users with Swift Playgrounds, this becomes an even bigger concern.

So with that as context, let’s dive into what’s new with Swift 3.

Swift 3. We announced it in December as part of the open source launch of Swift, and we’ve been developing it completely in the open.

The number one feature and the number one goal of Swift 3 is to get these early growing pains over with and turn Swift into a stable and mature base that we can keep compatible with future versions of the language.


[ Applause ]

So because of that, we’re focusing on the core essentials of the language and making the tools and the development experience really great.

And there are a ton of different ways that you can see this in Swift.

To give you a simple example, one common complaint about Swift 2 was that some Cocoa APIs didn’t feel natural in Swift.

Swift loves clarity and it aims to define away boilerplate.

In Swift 3, Cocoa APIs have an elegant feel.

You could say they’re totally Swifty.

Now, we did this with a number of different initiatives.

First, we sat down and thought hard about what really makes a great Swift API, and we wrote it down in a document that’s now available on Swift.org.

We then took those rules and built them right into the Swift compiler, so it automatically applies them to Objective-C APIs as it imports them into Swift.
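You can see the flavor of those rules in how common Swift 2 spellings were renamed. A small before/after sketch using standard library and Foundation calls (Swift 2 spellings shown in comments):

```swift
import Foundation

var numbers = [3, 1, 2]

// Swift 2: numbers.appendContentsOf([4, 5])
numbers.append(contentsOf: [4, 5])

// Swift 2: numbers.sortInPlace()
numbers.sort()

// Swift 2: csv.componentsSeparatedByString(",")
let csv = "a,b,c"
let parts = csv.components(separatedBy: ",")

print(numbers)   // [1, 2, 3, 4, 5]
print(parts)     // ["a", "b", "c"]
```

The redundant words move out of the method name and into argument labels, which is what makes call sites read like sentences.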

But Swift goes far beyond just naming.

Naming’s a pretty hard problem, though, but it goes beyond just naming.

And if you look at Foundation as a simple example, you’ll find entirely new Swift native data types.

Date is an example of this, and if you compare Date to NSDate, you’ll find that it provides proper value semantics, it’s about twice as fast to pass around, and it’s about 40 times faster to change due to reduced malloc/free traffic.

[ Applause ]
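Those value semantics are easy to observe: copying a Date yields an independent value, unlike sharing an NSDate reference. A quick sketch:

```swift
import Foundation

let deadline = Date()           // Date is a value type, not a reference
var extended = deadline         // this copies the value
extended.addTimeInterval(3600)  // mutating the copy...

// ...leaves the original untouched.
print(extended.timeIntervalSince(deadline))   // 3600.0
```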

Now, there are examples everywhere: NSCalendar becomes Calendar, global constants become scoped enums, NSDateComponents becomes a proper value type, and everything is just feeling so Swift.

If you move beyond Foundation, Dispatch is another critical API that we work with all of the time, but it provides this low-level, C-style interface.

With Swift 3, Dispatch has gotten a major overhaul and now has a beautiful object-oriented API.

[ Applause ]

And it follows all the best naming practices that you expect from a great Swift API.
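For a taste of the overhaul, here is a minimal sketch of the Swift 3 Dispatch API next to its old C-style spellings (shown in comments):

```swift
import Dispatch

// Swift 2 / C style: dispatch_queue_create("com.example.worker", nil)
let queue = DispatchQueue(label: "com.example.worker")

let done = DispatchSemaphore(value: 0)
var result = 0

// Swift 2 / C style: dispatch_async(queue) { ... }
queue.async {
    result = (1...10).reduce(0, +)
    done.signal()
}

done.wait()        // block until the async work finishes
print(result)      // 55
```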

Core Graphics is another example of that.

Here’s some typical Core Graphics code, and in Swift 3, it’s elegant, beautiful, and it works just like you expect.

[ Applause ]

Now, Swift 3 has a ton of great features that you can learn about all week long.

Swift 3 is available in Xcode 8, and Xcode provides a fantastic migration assistant to help move your code from Swift 2 to Swift 3 syntax.

Even better, Xcode 8 also includes Swift 2.3, which means that you can move to Swift 3 syntax when the time is right for you.

[ Applause ]

So that’s all I have for you today.

Thank you, and I’ll hand it back to Andreas to talk about the next big thing, Swift on the iPad.

[ Applause ]

Thank you, Chris.

So another aspect of Swift that excites us is that it’s so simple and so approachable that it’s not just great for implementing apps and server components.

It’s also great as a first programming language to learn.

In fact, we think this is super important, and when we designed Swift, that’s also an explicit goal we had.

And right from the launch of Swift, we introduced Xcode Playgrounds, an interactive environment where you can quickly iterate on your code.

And while it’s great for experienced developers using our Xcode IDE, we want to dramatically expand our focus and include kids who are just beginning to learn how to code.

So today we are announcing Swift Playgrounds, a new app for the iPad.

You’ve already seen a demo this morning with the keynote.

It is powered by Xcode technology, but it’s made from the ground up for learning and teaching how to program in Swift.

It’s both playful and engaging for the younger audience, and it’s a fun way of trying out new things for experienced developers.

It is designed to work with the touch interface of the iPad.

We created a new smart keyboard that brings you QuickType suggestions for your code, similar to code completion in Xcode.

You often can write entire lines of code without having to bring up a full keyboard.

When you edit inline values such as numbers and colors, we pop up these quick editing controls that allow you to easily select a value, again, without having to bring up a full keyboard.

You can quickly change the code in your Playground simply by dragging the structural elements on the screen with your finger.

And as you would expect, there’s a library of prebuilt code snippets you can insert into your code simply with a tap or again by dragging them with your finger.

And for the times where you do bring up a full keyboard to write some code, we present you with a dedicated coding keyboard, which makes it super easy to access the many special characters and numbers that you need for writing Swift code.

Just with a swipe of a finger.

No need to switch keyboard planes.

The app also ships with fantastic lesson content.

We’re building an entire series of lessons that will introduce you to programming in Swift step by step, and we’re planning to rapidly expand this content over the course of the next year, with a stream of challenges that will be updated frequently, so learners keep coming back to the app and stay engaged with the learning process, along with a number of deep-dive topics.

But you’re not limited to following this guided lesson content.

You also have the power and the flexibility to explore coding on your own with a set of simple templates.

We created one that makes it easy to put together a program with just text input and output and another one that allows you to explore and visualize graphics concepts based on shapes.

And beyond that, since you have the entire iOS SDK at your fingertips, you’re free to create pretty much any Playground you’d like and use the app to teach many topics.

You can even control robots like this one here on the screen and other accessories from your Playground, so it ends up being a lot of fun.

And to just show you a little bit more about how you can leverage the app, I’m going to ask Ken Orr to give us another demo.

[ Applause ]

Right. Thanks, Andreas.

Swift Playgrounds makes it easy to get creative through code, so let’s take a look at a Playground that I created earlier and I’ll show you just what I mean.

Now, I started here with the Shapes template, and you can see the beginnings of my picture over there on the right.

So just some squares of different sizes and colors rotated around the center.

Now, everything over on the right is a product of my code on the left.

At the heart of my code over here is this for loop.

That’s where I create each one of those squares.

And then up at the very top, I’ve got a variable that defines how many squares I should create.

Well, right now, that’s set to 15, but I’d really like to get that entire right-hand side to be filled up with color, so to do that, let me just try bumping this up to 80, and then I’ll tap Run.

Nice. And I think maybe instead of maroon, I’ll use a nice blue.

That looks good.

And I really like the pattern that I’ve got going on here where the squares, they’re kind of spiraling down into the center of the Canvas.

That looks really cool.

I bet I could make that even more obvious if I change the size of those squares more dramatically over time.

So back down in the for loop here where I calculate the size, I’m going to tap on the plus between these two different expressions.

Rather than just adding them together, why don’t I try multiplication?

Run that. That looks good.

So you can start to get an idea of just how easy it is to explore and experiment with code using just a touch.

So next, I’d like to bring a little bit of life to my picture.

I’ll use some animation for that.

Now, to do that, I’m going to use some API that’s built in to the Shapes template, and I think what I want is I want the squares to pop and spin in from the center of the screen.

So I’m going to tap right here after I create each square, and then down in the shortcuts bar, I’m going to scroll over and I’m going to tap on the Animate function.

So I’ll have an animation that’s maybe three seconds long and I’ll have it wait for just one second before it starts, and then everything that I put inside of that block there.

It’s going to be automatically animated for me, so I’m going to pull in the rotation like that.

And then I also want the size of the square to animate, so I think I’ll start it out at zero width and zero height, and then back in the Animate block, I’ll set it to what it was before.

So I’ll say square.size equals size, and then I already calculated the size above, so I’ll just use that same variable, and I’ll tap Run.

Very cool.

So it’s starting to come to life.

Thank you.

[ Applause ]

And there’s one more thing that I’d like to do here.

I’d like to add a little bit of touch handling.

And to do that, I’m going to use some more API that’s built into the Shapes template.

And down here at the bottom, I’ve actually already added a drag handler to the Canvas.

The block’s waiting for me to fill it in, and I’ve also written a function here that will rotate each square around the center of the Canvas.

So I just need to call that function in this block here.

So back in the shortcuts bar, I’m going to say squares.rotateForTouches, and then I’ll tap Run.

And now I’m going to tap my finger and hold in the top right, and I’m going to pull down, and just like that, I’ve added Touch Handling.

Thank you.

[ Applause ]

By the way, you may have noticed that so far, I haven’t actually needed the full keyboard for any of this, and I think that’s pretty cool.

There’s one more thing that I’d like to show you this afternoon.

So I’m a UI guy.

I love building custom controls and playful UI, and I’ve been tinkering around with this custom color picker here.

I’d love to show you that.

So when you tap on the color swatch, the Color Chooser pops out from underneath your finger and you can drag around and pick the color you want.

Now, all the code to build that is over on the left.

So let me make the code bigger so we can take a quick look.

Now, the first thing you’ll notice, up at the very top, I’m importing UIKit, so I have access to the iOS SDK in this Playground and in any Playground that I create.

And then all the way down at the bottom here, this is where I tell Swift Playgrounds to take my view and show it over on the right-hand side.

There’s one last thing that I’d like to add to my Playground here, and that’s I want to add something that I can set the color of so that I can try out my Color Chooser.

So I think to do that, I’ll just add a simple UI Image View, so let me bring up the coding keyboard, and I’m going to say, “Let image view equal UI Image View.”

And I’m going to use the initializer that takes an image, and then I’m going to tap on the third item in the shortcut bar, that little picture.

That’s an image literal.

So when I tap on it in my source code, I can choose from resources that have been added to this Playground.

[ Applause ]

So I added the Swift bird earlier, so I’m going to choose that.

And then the last thing I need to do here: I need to get the UIImageView into the view hierarchy, so I’ll just say viewController.view.addSubview and toss in the image view.

We’ll run that, then I’m going to make the view full screen.

And know we can see I’ve got my image, I’ve got our color picker, and I actually snuck in one little other feature here.

I want it to be a little bit more fun, set the color of the bird, figured hey, why not do it that way?

[ Applause ]

And that’s Swift Playgrounds.

Thank you.


[ Applause ]

Thank you, Ken.

So as you saw, no matter what kind of Playground you create, you always do so by leveraging the actual iOS SDK, including the APIs that give you access to the device’s hardware.

And you’re writing real Swift code; this is perhaps the biggest differentiator from other learning apps, which often use a limited approach that users quickly outgrow.

In Swift Playgrounds, you’ll always learn how to write real Swift code.

And by the way, as a side note, the entire Playgrounds app itself is also written in Swift.

[ Applause ]

So Swift Playgrounds offers you many ways to experiment and to teach and to learn how to code, and we are very passionate about letting as many users as possible, especially kids, take advantage of this new opportunity.

And in fact, we hope that you will help us create an entire platform for learning by creating additional content.

And to get you started, we’re going to make documentation on our Playground’s file format available on our website today.

And later this year, we are planning to publish our lesson materials, including the rich 3D Puzzle World, under a license that will let you copy and reuse our materials, incorporate your own ideas, and help us reach many audiences around the world.

So once a user has created a Playground, there’s numerous ways for sharing the results.

Playgrounds are simply documents so you can share them in all the usual ways that you’re familiar with.

And we didn’t stop there.

We made it easier to take pictures of your program’s output and even incorporated ReplayKit so that you can record a coding session and publish your work as a video.

And for [ Applause ]

And for more advanced learners, the Swift Playgrounds app on the iPad and Xcode on the Mac complement each other nicely.

You can move Playgrounds back and forth between the two environments and eventually transition into developing full apps in the Xcode IDE.

And so that brings us nicely to the next topic.

[ Applause ]

To tell you more about Xcode, I’m going to hand over to Matthew Firlik.

[ Applause ]

Thank you, Andreas.

Xcode 8 is a big release with much to talk about, so let’s just jump right in and start talking about our Source Editor.

For this release, we focused on adding the most highly requested features from all of you.

We started by adding in active line highlighting, and you can edit this color, so it works beautifully with any editor theme.

[ Applause ]

It gets better [laughter].

We also added support for Swift color literals so you can now view and define colors right in your source.

[ Applause ]

Let’s go for three.

We added in Swift image literals too so you can now view images in your source and as results for code completion.

And to help you write better documentation, the editor now has a command to generate markup for documenting your APIs.

[ Applause ]

So these features and a number of usability improvements really make Xcode’s editing experience awesome, but we didn’t want to stop there, so we’ve added one more feature, and it turns out to be your number one request.

App extensions.

[ Applause ]

With Xcode 8, we are opening up the IDE.

We’re starting with source editing.

A source editing extension works with the active editor to enable transformations, changes in selection, and navigation within the file.

And this opens up worlds of opportunities for commands such as reformatting, commenting, localizations, even to-do items.

Now, you can implement many actions in each extension, and each action is listed as part of the Editor menu.

Users can set key bindings for these actions too, to enable them as part of their workflows.

Now, as developers of extensions [ Applause ]

As developers of extensions, you can ship them through the App Store or deploy with Developer ID, and these extensions work in Xcode 8 on both Sierra and El Capitan.

[ Applause ]

Now, because these are standard extensions, users can feel safe too.

These extensions are run as a separate process and have their signatures verified.

And to further reinforce safety, Xcode is now secured by System Integrity Protection, which means only trusted libraries and extensions can interact with the IDE and your code.

So the new source editing extension is our first step towards making Xcode an even more extensible IDE.

We’d love to hear your feedback on it and other extensions you’d like to see.

Now, we’ve also been working on some improvements to the way you learn about code, and for that, we have a brand new API reference experience.

We’ve merged together the API documentation for our four platforms into a single, unified reference.

Now, this reference makes navigation easy, working through frameworks and symbols.

Now, searching the unified reference means you see a single result for each API, and we’ve integrated the same fuzzy matching we use for code completion and Open Quickly.

When browsing the reference, the platform availabilities are clearly listed for each API.

And in previous releases, we ship this as a separate download because of its size.

With this release, we’ve been able to shrink it to almost one-tenth of what it was previously, so now we include it by default, and you’ll always have the information you need.

[ Applause ]

Now, we also have some great, new improvements to Interface Builder, and I’d like to show them to you in a demo.

So here we have a project.

We’re calling this Trailblazer.

It’s a social application for sharing your favorite hiking trails.

Now, in addition to the way that your interface is laid out, you’ll now notice that Interface Builder is showing you the way it’s going to render on a device.

Interface Builder now shows you visual effects like vibrancy, blurs, and shadows right on the Canvas, making a seamless experience for designing and running your app.

You’ll also notice that we’re displaying the interface in the dimensions of a device, and there’s a new configuration bar along the bottom with common device sizes.

Here we’re looking at it in an iPhone 6s Plus.

I can select one of the other items, like let me click on one of the iPads, and I can see how my interface will be displayed there.

[ Applause ]

Here you see we’ve designed an adaptive layout using two columns on iPad.

To view it in other ways, I can change the orientation if I want to view it in landscape.

I can also view it in one of the adaptations.

For example, let’s look at it in one-third size, which is used for Slide Over and multitasking.

This is great because it really allows me to ensure I’ve implemented the right interface for all the ways users will experience my app.

Now, we’ve also improved the experience for creating adaptive layouts using size classes.

Let me give you an example.

The designers of this app wanted to put a button on top of the image to show the hiking trails overlaid on a map.

But because of the size of the map, they only want to do it on wider displays.

That’s really easy.

In the configuration bar, I’ll click the Vary for Traits button and I’m presented with the two size class options, width and height.

iPhones and iPads differ on the width size class, so I’ll check that option.

The configuration bar turns blue to remind me that I’m making customizations for a specific size class.

But you’ll also note the list of devices have changed to show me those devices which will take advantage of that customization.

Here I can see various iPads and orientations and adaptations.

The last item, though, is the iPhone 6s Plus in landscape.

That device and orientation makes use of the regular width size class, and this is great information because now I can ensure I’m designing the interface for the devices I intend.

I think our interface will look great there, so let’s go ahead and make those changes.

I’ll go into the library and we’ll grab a button out and drag it into the interface.

If your hands weren’t so sweaty, you could grab the button [laughter].

Come on.

Nobody look for a second.

This is not going to be funny in a second.

[ Applause ]

There we go.

[ Applause ]

That should not be the biggest applause.

Okay [laughter].

So let’s continue on.

I’m going to delete the title from this, and we’ll go ahead and set an image for it.

We’ll make it this nice compass icon we’re given and we’ll just place it correctly.

Now, I will connect up this button a little bit later, but let’s make sure we got the interface we wanted.

I’ll click Done in the configuration bar and we’ll switch back to the iPhone 6s Plus in landscape.

Our button appears.

When I switch back to portrait, it does not, and that’s exactly the interface I wanted.

So Interface Builder now makes it really easy to design adaptive layouts.

The configuration bar is going to show devices for iOS and watchOS.

It also shows some helpful options for tvOS.

I’ll bring up the tvOS version of our application, and you’ll see the configuration bar now shows you options for the light interface and the dark interface.

And I can even make customizations here too.

For example, let’s add a specialization of the compass icon for dark mode so that it pops out a little bit better.

Now, there’s one more feature for Interface Builder I’d like to show you.

You’ll see that the tvOS Storyboard is zoomed out so it fits on the entirety of the Canvas.

Interface Builder now supports Canvas operations at any zoom level.

[ Applause ]

So whether you want to zoom all the way in, get pixel perfect alignments, or you want to zoom way back out and work at an overview, you can do it all.

[ Applause ]

And that’s what we have new for Interface Builder.

[ Applause ]

Thank you.

Designing in Interface Builder now feels like working directly on a device, and editing at any zoom level means you’re no longer constrained by how big or small those devices really are.

For this release, we’ve also focused on the accessibility of our tools, and in particular, we’ve made Interface Builder work great with voiceover.

We’ve also completely rewritten our AppleScript Dictionary, making Xcode much easier to integrate with desktop automation.

[ Applause ]

Now, with each release, we add additional support for finding, diagnosing, and fixing issues.

And with Xcode 8, we are taking another big step forward.

Over the last year, we’ve added over 100 new diagnostics, which provide great insight as you build and as you edit your code with live issues.

We’ve added three new static analyzers, for localization, nullability, and deallocation, all common patterns that can cause issues in your apps.

And we’ve continued to invest in our testing system, improving stability, performance, and adding some new options.

Xcode will now capture and display logs for crashes that occur during your testing.

[ Applause ]

This is a great option because you can run your tests, collect those logs, and just like crash logs, you can view them right in the debugger to diagnose the issues.

We’ve also enhanced xcodebuild with a new option to run prebuilt tests.

[ Applause ]

Woo hoo, indeed.

This is perfect for integrating scalable testing with your own continuous integration.

So now more than ever, Xcode will help you with issues as you build, analyze, and test your apps.

But oftentimes some of the most interesting, if not let’s say diabolical, issues happen when you run your code.

For that, we’re introducing something new called Runtime Issues.

[ Applause ]

Runtime issues are reported just like our other issues.

You’re alerted to them in the Activity view, and you’ll see details about them in the Issue Navigator.

We’ve added a toggle at the top to highlight them.

Now, there are three kinds of runtime issues; UI, threads, and memory.

Let’s start with UI.

The View Debugger is already a great tool for visualizing and diagnosing problems with your interface.

In addition to improved accuracy and visual rendering, the View Debugger will now detect layout issues at runtime.

With each capture, the View Debugger is able to detect views with ambiguous layouts, and these are caused by missing and misconfigured constraints.

[ Applause ]

These issues are surfaced right in the Navigator and the Inspector provides details to help you fix them.

Now, threading issues are often unpredictable and can be difficult to debug, and there are many potential causes.

Things like unlocking from the wrong thread or data races.
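A data race of this kind can be made deterministic for illustration. This is a toy Python sketch, not code from the demo app: two events force the classic “lost update” interleaving on an unprotected read-modify-write, which is exactly the pattern a data race detector is built to catch.

```python
import threading

# A deliberately racy counter: read-modify-write with no lock.
counter = 0
a_has_read = threading.Event()
b_has_written = threading.Event()

def writer_a():
    global counter
    value = counter          # A reads 0
    a_has_read.set()
    b_has_written.wait()     # ...while B performs a full increment...
    counter = value + 1      # A writes back a stale result

def writer_b():
    global counter
    a_has_read.wait()
    counter = counter + 1    # B's increment happens in the middle
    b_has_written.set()

threads = [threading.Thread(target=writer_a), threading.Thread(target=writer_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not the expected 2: B's update was lost
```

Without the forced interleaving, the same bug would only bite occasionally, which is exactly why races are so hard to track down by hand.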

To help you track down these kinds of issues, we’re integrating Thread Sanitizer into Xcode 8.

When enabled for your application, the Thread Sanitizer is able to detect common threading problems and will surface them as runtime issues.

You can have Xcode break on these issues as they occur or you can collect them all and review them at the end of your session.

And the integrated report provides a breakdown of these issues, giving you details about any race conditions and giving you stack frames to help you navigate.


[ Applause ]

Now, like threading, memory issues are often challenging to identify and fix.

And to debug them effectively, you often want to view a graph of your objects and see how they’re all interconnected, so that’s what we built [laughter].

New in Xcode 8 is a memory debugger available in the Debug bar here, which will help you visualize and navigate the object graph for your running application.

[ Applause ]

Would you like it better if I tell you that it automatically finds leaks too?

[ Applause ]

That’s good because this is an amazing new tool for debugging memory issues, and I’d like to show it to you in a demo.

Okay, so here we have the Trailblazer application.

I’m going to launch it in the simulator.

Now, I was debugging this earlier and I noticed some memory issues I wanted to look at.

I’ll bring up the memory report while we’re doing this and I’ll click on the trail.

You’ll see that memory spikes.

That one’s okay.

I investigated that earlier, and that’s just from loading all the assets for the trail.

What I noticed, though, was as I clicked on each review, our memory spikes, and we don’t reclaim that memory, even if we go back all the way to the beginning of our application.

This is generally indicative of a memory management problem and something we can use the Memory Debugger to investigate, so I’ll click the Memory Debugger button in the bar here, and as we pause your application, we capture a graph of the objects.

The debug navigator now shows me all the objects allocated in my application, both those I created and the ones the system created for me.

I can filter this down to only showing the items from my project.

I can also type in a string or an address to look for specific objects.

Here I’ve typed in “controller” and I see I have three instances of the Comment View Controller around.

That’s surprising to me.

When I select any one of them, we’ll see a graph for this object on the right.

Now, what you’re seeing here are all the reference paths to this object that are keeping it around in memory.

One of these objects I see here is a Swift capture context.

This means that somewhere in code, my view controller has been captured as part of a closure.

That’s a good place to start looking.

I’ll bring up the Inspector to look at more details, and one of the details we show is the back trace to where that capture happened.

And of course, I can just click to navigate directly to the line of code that caused it.

[ Applause ]

And I can see the source of my problem.

I’ve set up an observer for this View Controller to be told when the rating changes.

This API returns an observation token that I see I’ve properly cleared up down here when the view goes away, but I never retained it in the first place.

That’s a common and simple mistake to make.

It’s also one the Memory Debugger makes really easy to find and fix.

Now, I mentioned before the Memory Debugger also finds leaks, and it’s alerting me to three I have up here in the Activity view.

I’ll click on that and be taken to the Issue Navigator, where I see three types of objects I’m leaking: an array, user review objects, and user objects.

I’ll click on one of the reviews, I’m sorry, the users, and now we see the reference cycle.

A user has an array of written reviews, and each one of those user reviews has a reference back to that user.

If all these references are strongly held, this will leak all of those objects.

Now, it looks like I’m leaking them all uniformly.

If I wanted more details though on any one of these objects, I can use the context menu to print something right to the console or just bring up a quick look to see more details.

Now, in this case, the reviewing user relationship is the one I want to investigate, and we can navigate it just like code.

I’ll just command click on it and be taken right to the line of code where that reference came from.

And sure enough, I forgot to declare that as weak.

So just like that, the Memory Debugger was able to show me the leaks I had and help me fix them.
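The fix in Swift is `weak var`, which keeps the back-reference from retaining the user under ARC. As a rough analogy, CPython’s reference counting behaves similarly when its cycle collector is off; this hypothetical sketch (the `User`/`Review` names are illustrative, not the demo’s code) shows a strong cycle leaking and a weak back-reference breaking it.

```python
import gc
import weakref

gc.disable()  # rely on pure reference counting, roughly as ARC does

class Review:
    def __init__(self, author):
        self.author = author  # strong back-reference: creates a cycle

class User:
    def __init__(self):
        self.reviews = []

# A strong cycle: user -> reviews -> Review -> author -> user.
user = User()
user.reviews.append(Review(user))
probe = weakref.ref(user)
del user
leaked = probe() is not None
print(leaked)  # True: the cycle keeps the User alive

# The fix, mirroring `weak var` in Swift: hold the back-reference weakly.
class WeakReview:
    def __init__(self, author):
        self.author = weakref.ref(author)

user2 = User()
user2.reviews.append(WeakReview(user2))
probe2 = weakref.ref(user2)
del user2
freed = probe2() is None
print(freed)  # True: no strong cycle, so it's deallocated immediately

gc.enable()
```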

[ Applause ]

So that’s the new Memory Debugger in Xcode 8.

Now, there is one other kind of issue I’d like to talk about today, and that’s with provisioning.

[ Applause ]

There is nothing more frustrating when you’re working on your projects than having issues with code signing.

Well, actually, that turns out to not quite be true because in some cases, the solution was more frustrating than the problem itself.

[ Laughter & Applause ]

So we addressed this in Xcode 8 and we’ve completely rebuilt our provisioning system.

[ Applause ]

We started by creating new signing actions that are faster, more robust, and built with new Xcode workflows in mind.

We created new user interface elements to clearly show you the profile, team, and certificate you’re using, and to clearly show you any issues.

We refined our messages to ensure they always include actionable information, and we also generate a log of the requests and the results to be transparent about what’s occurring.

This gave us a strong foundation on which to provide two new provisioning workflows.

Xcode 8 has an option for automatic code signing.

With this code signing option, Xcode takes care of all of the details using a dedicated profile, and this profile is separate from any that you create or manage.

We’ll take care of all the signing requests for adding entitlements and regenerating new items.

For cases where you would like more control of your signing setup, you can disable this option and use Customized Code Signing.

[ Applause ]

With Customized Code Signing, you can specify the exact signing assets you would like to use and you can specify them per build configuration, which is a great option when you have a variety of signing needs.

[ Applause ]

Customized Code Signing still takes advantage of our new foundations to give you great feedback and assistance for any issues.

And there’s one other area of provisioning that we wanted to focus on, which was development certificates.

To make development easier when working with many machines, Xcode now supports multiple development certificates.

[ Applause ]

This means when you get a new Mac, you just add your Apple ID and that’s it.

You no longer need to revoke or share a certificate from any of your other development machines.

[ Applause ]

So that’s the new provisioning system: automatic and customized signing, and multiple development certificates.

All ways that Xcode 8 makes provisioning easy and gives you the control you need.

And in this release, we focused on performance, and we have some great achievements for you.

Compared to the release we shipped just a year ago, you’ll find Xcode launches twice as fast all the way up to being 50 times faster at the indexing of tests.

[ Applause ]

These improvements all add up to make Xcode 8 something that is fast and fun to use.

So these have been some of the many features and enhancements you’ll find in Xcode 8.

Please come by the labs this week and let us know what you think.

Next, I’d like to invite up Sebastien Marineau-Mes, who’s going to give you some information about exciting, new platform technologies.

Sebastien [applause]?

Thank you, Matthew.

Thank you.

Let me now give an update on a number of key, foundational technologies, and I’m going to start with compression.

Now, if you recall, last year, we introduced lzfse as our new mainstream compression.

At three times the speed of zlib and less than half the energy consumption, it’s a really compelling technology and one that’s seen great adoption.

Now, today we’re announcing that we’re open-sourcing lzfse, and we [applause] There you go.

And we believe that this will encourage further adoption, especially for multiplatform and back-end offline compression use cases.
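lzfse itself ships as a C library and isn’t in the Python standard library, so as a stand-in, this sketch uses zlib and lzma to illustrate the tradeoff lzfse is tuned for: compression speed versus ratio on the same payload.

```python
# zlib and lzma stand in for lzfse here: same lossless round-trip
# guarantee, different points on the speed-versus-ratio curve.
import time
import zlib
import lzma

payload = b"WWDC 2016 Platforms State of the Union. " * 5000

for name, compress, decompress in [
    ("zlib", zlib.compress, zlib.decompress),
    ("lzma", lzma.compress, lzma.decompress),
]:
    start = time.perf_counter()
    packed = compress(payload)
    elapsed = time.perf_counter() - start
    assert decompress(packed) == payload  # lossless round trip
    ratio = len(payload) / len(packed)
    print(f"{name}: {ratio:.1f}x smaller in {elapsed * 1000:.1f} ms")
```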

Next, I’d like to talk about networking.

Now, we know that the performance of some applications really depends on having a great network connection, but today’s networks don’t really have a great way to prioritize the traffic that’s most important.

For example, people downloading YouTube cat videos at work can interfere with your really important video conference.

And so in working with Cisco, we’ve added intelligence to the network, and what the network is now able to do is identify those trusted devices.

Identify those applications that are most important to your business, and then prioritize that traffic end to end throughout the network.

Which gives you much better performance for those applications that are most important to you.

That is networking.

Next [ Applause ]

Next, let’s talk about logging.

Now, logging is a technology that all of you use during development, debugging, and for field diagnostics.

Traditionally, logging has been very fragmented.

A number of you roll your own solutions.

The solutions that are available in the platform are often slow, and so this year, we’ve set out to rethink logging, and we’ve come up with a technology that we think is very compelling.

It’s unified.

It’s extremely fast.

It’s very compact in how it stores data on disk.

It also gives you enough flexibility to support logging across applications, daemons, and system services.

It has this concept of in-memory tracing where you can capture very high-frequency log messages that only get persisted to disk when your application actually hits an error condition.

And finally, we’ve baked privacy right in, so you can capture very rich log messages during development and have those be automatically redacted when you ship your application to end customers.
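The redaction idea can be modeled in a few lines. This is a conceptual sketch only, not Apple’s os_log API (in the real API you annotate values as private in the log format string); the names here are made up for illustration.

```python
# Toy model of privacy-aware logging: values marked Private are shown
# in full during development and automatically redacted when shipping.
DEVELOPMENT_BUILD = False  # imagine this flips when you ship

class Private:
    """Wrapper marking a log argument as sensitive."""
    def __init__(self, value):
        self.value = value

def log(template, *args):
    rendered = []
    for arg in args:
        if isinstance(arg, Private):
            rendered.append(str(arg.value) if DEVELOPMENT_BUILD else "<private>")
        else:
            rendered.append(str(arg))
    return template.format(*rendered)

line = log("user {} opened trail {}", Private("toby@example.com"), 42)
print(line)  # user <private> opened trail 42
```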

Now, along with this [applause] we have rewritten the Console application, and it is much richer in capability.

It gives you, for example, the ability to live-stream logs from development devices, advanced filtering, and great system introspection capabilities, so that is the new logging.

Next up [ Applause ]

Next up, let’s talk about file systems.

All right.

Now, of course, HFS Plus is the mainstream file system on Macs, and it was first released in 1998.

Today it’s deployed across every Apple product, over a billion devices.

And of course, HFS Plus was designed over 18 years ago, and I think it’s really a testament to the strength of its original design that it’s still a compelling file system today.

But of course, when it was designed, we had floppy drives on Macs, and so we thought now may be the time for us to launch a new file system.

And so today we’re announcing brand new Apple File System.

[ Applause ]

There you go.

I thought you guys might be excited by this or the crowd might be excited.

Now, the Apple File System is scalable from our smallest device, the watch, to high-end Mac Pros with very large storage configurations.

It’s also modern.

We designed it first and foremost for today and tomorrow’s storage technologies, Flash and SSD.

It’s resilient, and we’ve used the opportunity to unify encryption across iOS and macOS, which gives us great flexibility going forward.

Now, the Apple File System has a number of new, unique features, and I want to highlight two of them for you.

The first is called cloning.

Now, why cloning?

It turns out that if you look at a system that’s been running for a while, you’ll find many duplicates of identical files.

It’s kind of human nature to copy things around.

That’s of course inefficient.

It uses up storage space.

But with clones, you’re able to copy files instantly and only pay for additional storage when the files are actually modified.

It’s very fast and you can clone files, directories, and directory hierarchies.
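The cloning behavior can be sketched with a toy copy-on-write file model; the class, file contents, and block sizes here are illustrative, not Apple File System internals.

```python
# Toy copy-on-write model: a clone shares the original's data blocks
# and only allocates new storage for blocks it actually modifies.

class CowFile:
    def __init__(self, blocks):
        self.blocks = list(blocks)  # references to shared block objects

    def clone(self):
        # Cheap: copy references, not data.
        return CowFile(self.blocks)

    def write_block(self, index, data):
        # Only now do we pay for new storage.
        self.blocks[index] = data

original = CowFile([b"aaaa", b"bbbb", b"cccc"])
copy = original.clone()

# Before any write, every block is physically shared.
shared = sum(a is b for a, b in zip(original.blocks, copy.blocks))
print(shared)  # 3

copy.write_block(1, b"BBBB")
shared = sum(a is b for a, b in zip(original.blocks, copy.blocks))
print(shared)               # 2: one block diverged, two still shared
print(original.blocks[1])   # b'bbbb' -- the original is untouched
```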

The second feature that I want to highlight is called Snapshots.

Now, what are Snapshots?

They’re really an image of the content of the file system at a point in time.

Why would you want this?

Well, let’s say you’re creating a backup application.

Using Snapshots, you can go and back up a consistent view of the file system at a point in time.

Another great example of where this is useful is in the classroom.

You may set up a device for your students that has content, configuration files, applications, and so on that you’re going to use during a class.

As the students use a device, they may inadvertently modify the content or the settings, and what you can do is you can use Snapshots at the end of the class to revert the device back to its original state so that it’s ready for the next class of students.
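The classroom revert flow can be sketched with a toy volume that captures and restores point-in-time images. A real file system records a snapshot without copying data; this toy version just copies a small mapping, and all names are illustrative.

```python
# Toy model of file-system snapshots: capture an image of the state,
# let it be modified, then revert to the captured image.

class ToyVolume:
    def __init__(self):
        self.files = {}
        self.snapshots = {}

    def snapshot(self, name):
        self.snapshots[name] = dict(self.files)

    def revert(self, name):
        self.files = dict(self.snapshots[name])

vol = ToyVolume()
vol.files["lesson.txt"] = "Chapter 1"
vol.snapshot("start-of-class")

# Students scribble over everything during class...
vol.files["lesson.txt"] = "asdfjkl;"
vol.files["game-save.dat"] = "high score"

# ...and the device is restored for the next class.
vol.revert("start-of-class")
print(vol.files)  # {'lesson.txt': 'Chapter 1'}
```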

Now, these are features of the new Apple File System, and a developer preview of it is part of the macOS Sierra release available to you today.

We’d encourage you all to download it, check it out, give us feedback, and the Apple File System will be coming to all Apple devices soon.

[ Applause ]

All right.

Finally, let’s talk about privacy.

You heard about a new, powerful technique called differential privacy in this morning’s keynote, and I will attempt to explain that to you.

I could start with the formal math behind differential privacy, and just note that while it does look complex, I can guarantee you the math actually works.

And I’m going to actually instead explain it to you using a couple of examples.

So the first example is we’re going to use differential privacy to try to resolve, once and for all, one of the most controversial and important questions of modern computer science: code formatting [laughter].

This is of course an important question.

We’re going to poll our audience to get the answer to this.

It’s also something where you want your answer to remain private because, let’s face it, some of your coworkers may be pretty passionate about this.

So how do we do this using differential privacy?

Well, first, each of you would provide your preference, but before we send that over to Apple to aggregate the results of the survey, through differential privacy, we add noise to each answer.

And after we’ve added noise, we actually have no way to know what you originally answered.

We send this to Apple, and the beauty of differential privacy is that after we aggregate this data over a large population, we’re actually able to recover the answer to our question.

Now, I’m not going to pass judgment on the actual answer.

It looks like of course we’re still very divided in our opinions on this, but I will remind everyone that after you’ve run your code through the compiler, this doesn’t actually really matter, so [laughter] that’s [ Applause ]
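The polling mechanism just described is essentially the classic randomized-response technique, and it can be sketched in a few lines of Python. The flip probability and the true preference rate below are made-up numbers for illustration; the point is that the aggregate survives even though every individual answer is noisy.

```python
# Randomized response: each respondent flips their true answer with
# probability p_flip before sending it; the aggregator inverts the
# known bias to recover the population rate.
import random

random.seed(2016)
p_flip = 0.25        # noise level (an assumption for this sketch)
true_rate = 0.6      # fraction who actually prefer tabs (made up)
n = 200_000

reported_tabs = 0
for _ in range(n):
    answer = random.random() < true_rate
    if random.random() < p_flip:   # add noise: flip the answer
        answer = not answer
    reported_tabs += answer

# E[observed] = true*(1-p) + (1-true)*p, so invert that relationship:
observed = reported_tabs / n
estimate = (observed - p_flip) / (1 - 2 * p_flip)
print(round(estimate, 2))  # close to 0.60, despite every answer being noisy
```

No single reported answer reveals a respondent’s preference, yet the aggregate recovers the population’s answer, which is the heart of the technique.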

Now, how are we actually applying differential privacy in iOS 10?

Let me give an example of this.

In Spotlight, we provide suggestions for deep links and we’d of course like to surface the most relevant and popular suggestions to our end users.

And so the way that we do this, we assign to each of our deep links a unique hash, and as users navigate on their device, whenever they encounter a deep link, we take that hash, add noise to it, extract a fragment of it, and send it to Apple.

Now, any one of these fragments on their own is completely meaningless, but when we aggregate this across our entire user base, we’re able to recover the popularity of our deep links and then use that to surface them in Spotlight and improve our user’s experience.

That is the science behind differential privacy.

There’s one more aspect of it though that I want to touch on.

You might think, well, if we’re capturing samples from users and this works by capturing many samples from many users, what happens if Apple captures too many samples from one user?

Could you not then figure out what I’m doing?

And this is where the privacy budget comes in.

The privacy budget limits the number of samples that we can capture from any given user, and it ensures that in the end, we can never recover any meaningful information about any one of our users.

So that is differential privacy.

It’s a powerful technique that allows us to learn from our users, improve user experience, and still maintain your privacy.

So with this, I’m going to hand it over to Toby, Toby Paterson, who will be talking about higher-level features in iOS 10.


Thanks, Sebastien.

[ Applause ]

Good afternoon.

There we go.

So, you know, it’s really thanks to all of you that we have such a rich ecosystem on iOS, and we are constantly looking for new ways to help users find applications, get into the apps they like to use, and to integrate your applications across the rest of the OS.

You saw earlier how your app can propagate virally through messages, and we’re making it really easy to tell people about your application right here from the home screen.

We’ve added a new Share button to the Quick Actions List.

This brings up the Share Sheet, and so now you can Tweet your application to the whole world, and this is available for free in every application.

Now, we have a lot of ways that we try to get users into the right app at just the right time.

I’m going to move through them quickly, so bear with me here.

Handoff lets you carry on a task from one device to another.

Spotlight can link directly to your application content.

A Universal Link will take you to the most appropriate place for a platform, and you can link directly from one application to another.

Siri will suggest apps here in Today view and in Spotlight.

We can suggest an app at just the right time in the App Switcher on the lock screen.

We can connect apps based on common data types like this and this and this.

Well, I think you get the idea.

There’s only one thing I really want you to take away from all of this, which is that NSUserActivity is your gateway to a whole ton of functionality.

It’s how your application tells the OS what people are doing in your app, and that allows us to create intelligent suggestions and connections between your applications.

And in iOS 10, we’re adding two important pieces of information.

Now, many apps use addresses in a variety of ways, and we can use that to make connections between your applications.

Let me illustrate with an example.

This is the Yelp page for one of my favorite restaurants in San Francisco.

Now, suppose it were to provide an NSUserActivity with this address on it.

That would allow me to do things like asking Siri to just take me here.

My phone knows that I use Uber a lot to get around, and it can make it really easy for me to order a ride directly to the restaurant.

Or when I’m typing in a text field that’s expecting location data, QuickType can suggest an address that I was just looking at and Maps can include that in its list of suggestions along with a quick way of getting back into the application.

Now, we also interact with people in a variety of ways, and the OS can learn which app I use to communicate with any given person.

To do this, your app needs to give us three pieces of information.

Enough context so that we can find an entry in the address book for this person, the kind of service that you’re providing, say a messaging or video chat platform, and the specific identifier or handle that you’ll use for this person.

Now, this is the new Address Book card, which we’re making available a lot more prominently throughout the OS, and you’ll notice that we can automatically include in here information that we learned from your applications.

Now, when I tap on one of these new Quick Communication buttons at the top of the card, we can also include your app in the list of options.

When I make my choice, we’ll remember that so that the next time when I tap this button, we can take you straight into the application.

I should point out that all of this learning is private to the user and accessible only on their devices.

Now, we deeply believe that integrating your apps across the OS makes for a much richer user experience.

Extensions, of course, are how you do that, and you’ve heard earlier about the new iMessage Apps, Maps extensions, and SiriKit.

Well, I’d like to tell you a little bit about two extension points that we’re adding to Notifications.

A service extension runs in the background and allows you to modify the push payload before we show the notification to the user.

It lets you do things like downloading an image, a video, or an audio file in the background and embedding it directly in the notification.

Or you could encrypt your push payload on your server and use a service extension to decrypt it locally on device, providing for full end-to-end encryption.

[ Applause ]

Now, I should point out you may want to use something a little stronger than the ROT13 algorithm that we’re proposing here.

We’re going to double ROT13 it next year for extra security [laughter].
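For what it’s worth, the joke checks out: ROT13 is its own inverse, so a double application returns the original payload, adding exactly zero security.

```python
# ROT13 is an involution: encoding twice is the identity function.
import codecs

payload = "meet at the secret trailhead"
once = codecs.encode(payload, "rot_13")
twice = codecs.encode(once, "rot_13")

print(once)               # the "encrypted" text
print(twice == payload)   # True
```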

Now, if you want an even richer user experience, a content extension can provide an arbitrary view that we’ll use for the expanded look of a notification.

This lets you provide a dynamic and interactive experience that’s really tailored to your application.
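A minimal sketch of such a content extension follows; the view controller class, outlet, and label are illustrative, and in practice the view is laid out in the extension’s storyboard:

```swift
import UIKit
import UserNotifications
import UserNotificationsUI

// Sketch of a notification content extension providing a custom expanded view.
class NotificationViewController: UIViewController, UNNotificationContentExtension {
    @IBOutlet var messageLabel: UILabel!

    func didReceive(_ notification: UNNotification) {
        // Populate the custom expanded view from the notification payload.
        messageLabel.text = notification.request.content.body
    }
}
```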

This morning, you got a quick tour of the revamped Today view with its vibrant, new look for widgets.

Now, a widget can still be any size you want within reasonable limits, but we’re also adding a new, compact, fixed size to optimize for better information density.

And the thing that we’re really excited about is making these widgets available right here from the home screen.

I’m going to tell you what you need to do to get this functionality in your widgets.

The first thing you want to do is probably update your look and support the new compact size so that your widget doesn’t look too funny.

You need to build with the iOS 10 SDK, and that’s it.

Nothing else has changed about how you build your widgets.

You just get this new functionality for free, and so we’re really excited.

We think this is going to open up a whole new dimension for your applications.

Okay, let’s switch gears now and talk about the Mac.

So macOS Sierra is adding full support for localizing your apps in right-to-left languages, including reversing UI elements in places where that makes sense.

The Mac now joins iOS and watchOS, which quietly introduced support for this earlier in the year.

And with so many potential customers all over the world, it’s really more important now than ever before that your app be properly localized and internationalized.

And you can learn more about that here.

Now, you know, when Sebastien first started talking about tabs versus spaces, we were pretty sure this was where he was going.

You saw this morning how people can gather together all of their windows into a single tabbed UI.

AppKit will take care of pretty much everything here for you and it’s smart enough not to pair your preferences windows with your document windows, and so on and so forth.

And in fact, if you’re using NSDocument, there really is nothing else that you need to do in your application.

If you’re not using NSDocument, there’s a little bit of API you need to adopt to support creating a new tab, but I’d really encourage you to take a look at whether NSDocument isn’t appropriate for your use case.

You see, we also showed you this morning how we’re making it easy for people to move their documents and their data into the cloud, and we really believe that this is the future of file storage.

So it’s super important that your app adopt best practices in terms of file coordination and metadata query.

And here too if you’re using NSDocument, it will take care of pretty much all the heavy lifting for you, along with UIDocument as counterpart on iOS.

Now, I have an important update for you about iCloud.

As you know, the iCloud APIs are available on all of our platforms, but on the Mac, use of these APIs has been restricted just to those apps distributed via the Mac App Store.

Well, in macOS Sierra, we’re removing that restriction.

[ Applause ]

Now, your app still needs to be signed with a valid developer ID, which by the way, will also get rid of those pesky, untrusted developer alerts.

But once you’ve done that, you can use all of these API no matter how you distribute your app to your customers.

Next, CloudKit.

We introduced CloudKit two years ago, and it’s the foundation that we use for building all of our new cloud services.

Now, it has a fairly coarse-grained permission model.

Either your data could be accessed by everybody in the world or it’s restricted to just a single user.

Well, the new CloudKit sharing feature opens that up and gives your app explicit control over who can access your data.

[ Applause ]

The new CKShare class governs the permissions for who can read and write a given set of records, and this API is available on all of our platforms.

On the Mac and on iOS, we’re providing standard UI for taking care of the mechanics of inviting people and managing people in your application.
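Putting those two pieces together, a hedged sketch of sharing a record with CKShare and the standard iOS sharing UI might look like this; the record, share title, and helper function are illustrative:

```swift
import CloudKit
import UIKit

// Sketch: share an existing CKRecord using CKShare and the standard
// UICloudSharingController UI (record and title are illustrative).
func presentSharing(for record: CKRecord, from viewController: UIViewController) {
    let share = CKShare(rootRecord: record)
    share[CKShareTitleKey] = "Shopping List" as CKRecordValue

    let controller = UICloudSharingController { _, completion in
        // Save the root record and the share together in one operation.
        let operation = CKModifyRecordsOperation(recordsToSave: [record, share],
                                                 recordIDsToDelete: nil)
        operation.modifyRecordsCompletionBlock = { _, _, error in
            completion(share, CKContainer.default(), error)
        }
        CKContainer.default().privateCloudDatabase.add(operation)
    }
    controller.availablePermissions = [.allowReadWrite, .allowPrivate]
    viewController.present(controller, animated: true)
}
```

The standard controller then handles inviting and managing participants for you.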

We use CloudKit sharing for the new collaboration features that we’ve built into Notes, and I encourage you to check it out in the developer preview today.

With that, I’d like to hand off to Josh Shaffer [assumed spelling], who’s going to give you some updates on watchOS.

Thank you.

[ Applause ]

Thanks, Toby.

As you heard in the keynote, watchOS 3 simplifies navigation and optimizes performance.

These improvements go beyond just the system level and include many enhancements to the apps as well.

These enhancements focus on three key qualities for Watch apps.

The first is that they be glanceable.

Individual interactions with Apple Watch are short, so it’s important to present well-designed, simple information to the user, focusing just on what’s most relevant to make sure that it’s easy to digest.

The second is that they be actionable.

Now, this includes simplifying access to the most common actions taken on the watch itself.

For example, we’ve redesigned the Fitness app to reduce the number of steps necessary to start a workout.

It also means simplifying access to key information that your users will want to act on even if they won’t take the action on the watch itself.

And the third is ensuring that your apps remain responsive by keeping them up to date and ready to act as soon as they’re needed.

Okay, so when you think about an app, the first thing that may come to mind is an app that takes up the full screen.

Now, that’s definitely part of it, but on Apple Watch, it’s important to keep in mind that your app has access to two other great interfaces in the form of complications and notifications.

So these three interfaces are just three different views into your one app, so it’s important that they present consistent information.

When you update any one of them, you’ll want to update all of them because if they’re displaying different information, then your users won’t trust any of them.

Now, you may remember that in watchOS 2, there was a fourth interface called Glances.

Glances provided simple access to a summary of your favorite apps’ information.

In watchOS 3, this functionality is now provided by the new Dock and is instantly accessible from anywhere with just a press of the side button.

If your watchOS 2 app included a Glance, that separate interface is no longer necessary in watchOS 3, but its simple design can help you update the primary interface for your application to make sure that it looks and works great when viewed from the Dock.

Apps in the Dock are instantly responsive, helping to make sure that the most common tasks taken on the watch can be accomplished in just a couple of seconds.

Now, that’s obviously a very short time, but striving for it can be a great guide in helping you design the top-level features of your apps to make sure that they work really well on Apple Watch.

While each individual interaction is short, some apps may be used multiple times over a longer period of time.

For example, I may refer back to my shopping list many times while I’m at the store.

In watchOS 3, each time I raise my wrist, I can instantly see the items I still need to get and check off the ones I’ve already picked up.

It’s now really easy to design apps that have this kind of interaction model because watchOS 3 will return you to the last app that you were using for up to 8 minutes after you last used it.

Once you’re done using it, you’ll be returned to the watch face.

[ Applause ]

When your app’s not in use, ensuring that it stays responsive means keeping it up to date in the background so that it’s ready when you do want to use it.

To make this really easy, watchOS 3 includes a brand new set of Background App Refresh APIs.

Background App Refresh is a critical part of building responsive watchOS applications, and periodically updating your app in the background can be a great enhancement no matter what type of app you’re building.
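As a rough sketch, scheduling one of those periodic background refreshes from your watch extension might look like this; the 30-minute interval is an arbitrary example:

```swift
import WatchKit

// Sketch: ask watchOS 3 to wake the app for a background refresh later.
func scheduleNextRefresh() {
    let fireDate = Date(timeIntervalSinceNow: 30 * 60) // ~30 minutes out (example)
    WKExtension.shared().scheduleBackgroundRefresh(withPreferredDate: fireDate,
                                                   userInfo: nil) { error in
        if let error = error {
            print("Could not schedule refresh: \(error)")
        }
    }
}
```

The preferred date is a hint; the system decides exactly when the task runs.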

Now, if you’re building a fitness app, for these types of apps, it’s even more important that they remain up to date for the entire duration of the workout, so we have some additional enhancements just for them.

During a workout session, these apps will now run continually in the background even while the screen is off, ensuring that they can monitor workout progress and provide updates to the user when they reach key milestones using haptics.

During the workout, they also remain instantly available even if you switch to another app to perform another task.

So if I jump out to the Music app to change the current track, after I’ve dropped my wrist, I’ll very shortly be returned back to my workout.

We’ve also enhanced the access to the heart rate and accelerometer sensors so that they now provide continuous values for the entire duration of the workout as well.

[ Applause ]

In addition to these sensor enhancements, we’ve also got a whole bunch of great, new hardware access APIs.

Crown events give you raw access to rotation events from the Digital Crown.

Gesture recognizers make it really easy to add custom touch interactions to your apps, such as tap, swipe, and pan, and of course, the gyroscope is now also accessible in addition to the accelerometer.
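For instance, a minimal sketch of receiving raw Digital Crown rotation in an interface controller might look like this; the class name and accumulated value are illustrative:

```swift
import WatchKit

// Sketch: raw Digital Crown rotation via WKCrownSequencer (watchOS 3).
class DialController: WKInterfaceController, WKCrownDelegate {
    var value: Double = 0

    override func willActivate() {
        super.willActivate()
        crownSequencer.delegate = self
        crownSequencer.focus() // route crown events to this controller
    }

    func crownDidRotate(_ crownSequencer: WKCrownSequencer?, rotationalDelta: Double) {
        // Accumulate the raw delta; update the UI from this value.
        value += rotationalDelta
    }
}
```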

[ Applause ]

To make sure that you can do really cool things with all of these hardware access APIs, we’ve also brought a bunch of graphics and media frameworks to the watchOS SDK.

SpriteKit and SceneKit give you a ton of flexibility to enhance the visuals of your applications and also your notifications.

AV Foundation makes it really easy to play sounds out of the watch speaker, and in the spirit of simplifying navigation in your apps, you can now play video back inline in your application interfaces as well [applause].

Of course, you also often need to get data on and off the watch, and to help make that easier, as Toby mentioned, CloudKit is now part of the watchOS SDK.

Because it’s built on top of NSURL session, it works even when your phone’s not present and your watch is near a known Wi-Fi network.

Apple Watch is already a great way to pay for physical goods in stores, and in watchOS 3, you can now offer physical goods for sale within your own apps to be purchased with just a double-tap of the side button.

Building a great watch app is just the first step.

It’s also important that you make it easy for users to discover and install your apps.

And to help you with that, the new watch face gallery in iOS 10 includes an entire section devoted just to displaying all of your apps.

With just a few simple steps, you can create a complication bundle which enables your app to appear in the watch face gallery.

It’s the first thing that a new watch owner will see after they’ve paired their watch, and it displays the complications from all their favorite apps that they use on their iPhone every day.

This makes it really easy for them to get their apps installed and added right to their watch face.

We’ve been working really hard to give you all the tools you need to build great, glanceable, actionable, and responsive apps, and we can’t wait to see what you’re going to do with them.

To give you some ideas of how this can help enhance your apps on watchOS, Eliza Block will now come up and give us a demo of Background App Refresh and some of the new graphics APIs.


[ Applause ]

Hi. So I have here an application that I built for watchOS 2, which shows Max the panda, my friend, and it tells me what his mood is at different times.

So right now, you can see that he’s happy.

If I now suspend the app, you can see that I have a complication also telling me that he’s happy, but here is a notification that tells me that suddenly Max has become hungry.

Now, this app for watchOS 2 has a few problems which I’m going to show you now.

The first is that when I dismiss the notification, my complication has not updated to reflect the fact that Max is now hungry.

And even worse, when I go into the Dock, the snapshot of the app in the Dock also has not updated, so I’m failing to show a unified set of data across all the interfaces to my app.

So going now to the code, there’s a really easy way new in watchOS 3 to address these issues.

In my extension delegate, I have a single funnel point, handle(_ backgroundTasks:), which is the perfect place to update all of these interfaces as my data changes.

Here in my Snapshot Refresh background task, I’m going to add just two lines of code to address both of these problems.

The first is to update my interface for my current friend’s status, and second, I’m going to also reload my complication as well when the snapshot is taken.
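Those two lines, in context, might look roughly like this in the extension delegate; updateInterfaceForCurrentStatus() is a placeholder for the app-specific UI update shown in the demo:

```swift
import WatchKit
import ClockKit

// Sketch of the snapshot-refresh handling described in the demo.
class ExtensionDelegate: NSObject, WKExtensionDelegate {

    func handle(_ backgroundTasks: Set<WKRefreshBackgroundTask>) {
        for task in backgroundTasks {
            if task is WKSnapshotRefreshBackgroundTask {
                // 1. Update the interface for the current friend's status.
                updateInterfaceForCurrentStatus()
                // 2. Reload the complication when the snapshot is taken.
                let server = CLKComplicationServer.sharedInstance()
                for complication in server.activeComplications ?? [] {
                    server.reloadTimeline(for: complication)
                }
            }
            task.setTaskCompleted()
        }
    }

    func updateInterfaceForCurrentStatus() {
        // App-specific placeholder: push the latest status into the UI.
    }
}
```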

Okay, so that should address the problems that we saw.

But while we’re in here, let me make this app a little bit more fun because as you saw, Max was kind of just a static image in watchOS 2.

But in watchOS 3, we can incorporate a SceneKit scene to make some animation happen.

So I’m going to switch over here to my interface, and here’s my application interface.

I’m going to delete the image and drag in a SceneKit scene.

Now, I wrote a tiny bit of code earlier just to populate this scene with some SceneKit assets and to make an outlet, which I’ll connect right now.

Drag it into my SceneKit interface.

Now, not only can you add a SceneKit scene to your application itself, but you can also add one to the notification, which will be a lot more fun than a text-based notification.

I’m going to drag one into there as well.

I need to resize this a little bit.

Hook it up to the outlet.

All right, and with those changes, I’m going to go ahead and run this application again, and we’ll hopefully see something a little bit more dynamic this time.

So here we have a much happier looking version of Max running around, and when I now suspend the application, wait for him to get hungry.

We see a really sad, more convincing version of Max rubbing his tummy.

[ Applause ]

So I think this is a lot more fun.

When I dismiss the notification, my complication has updated as we hoped to reflect his current status, and if I look in the Dock, you can see that the snapshot has also now been updated to reflect that he’s hungry and no longer happy.

So that’s just a few of the new things you can do which, with the new APIs in watchOS 3.

We’re excited to see what you guys build.

And next up, I’d like to invite Jim Young to the stage to talk about tvOS.

[ Applause ]

Thanks, Eliza.

We’ve been super excited since bringing the App Store to Apple TV just this last October because we believe the future of TV is apps, and since then, you guys have been very busy bringing over 6000 apps to the App Store, including entertainment apps, games, health and fitness, and education apps, and many more.

So let’s talk about what it means to develop on tvOS.

First, it’s already familiar to you.

You use all the tools and languages you already know.

And tvOS includes the essential frameworks that you’re familiar with from iOS.

Plus, we’ve got some new frameworks specific to tvOS that we’ve built.

There’s all the foundational support you need for building great apps.

We have a rich media stack for audio/video apps.

And of course, we’ve got excellent support for games.

And we’re bringing even more features to the platform.

With PhotoKit, you can now create these great apps that showcase users’ photos on their big screen, and we’ve made it super easy to add support for our light and dark appearances.

So it’s a rich set of technologies, and this is just a partial list.

There’s a whole lot more.

Let’s talk about using all these technologies and bringing them all together to deliver a great experience on tvOS.

And I want to start with Touch.

When designing tvOS, we knew we wanted to bring a first-class Touch experience into the living room.

Now, Touch needs to feel fluid and connected.

It needs to be predictable.

It needs to be fun.

That’s a particular challenge in the living room when the TV’s 10 feet away.

So let’s look at how we solve this.

We have updated UIKit so that all the controls and views look beautiful on the big screen.

We’ve added UIFocus to allow you to indicate which elements of your UI are focusable.

And we have updated UIMotionEffect so it takes input from the focus engine.

Now, altogether this provides a direct connection between what the user is doing with the remote and what they see on the big screen.

Good news here is that we’ve done all the heavy lifting for you.

Your UIKit apps will get all of this for free.

Now, if you have a server-based app, an app used to deliver content, we’ve got a lot of great apps like this one from Showtime.

For these, we have a framework called TVMLKit.

TVMLKit is a new, high-level framework we built specifically for tvOS.

It’s built on top of UIKit so you get all that connective Touch experience that we just showed.

We provide a large set of templates, and you can even provide your own custom templates and native controls.

TVMLKit has allowed teams to build these beautiful, highly stylized, customized apps in a short amount of time.

Let’s now talk about how you can extend the experience outside of Apple TV by integrating with other Apple devices.

One easy thing to do is to save app and game state in the cloud using CloudKit, making it easier for users to start a game on their Apple TV and then pick it up and resume right where they left off on their iPhone or iPad.

We’ve also seen some great apps that use multiple Apple devices at the same time.

With SongPop Party from FreshPlanet, the whole family can play using their iPhone to answer music trivia questions.

And to make it even easier to create apps that talk to each other across devices, we’re bringing Multipeer Connectivity to the platform.

[ Applause ]

With just a few lines of code, you can easily connect to your apps running on a different device.
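A minimal sketch of that connection using Multipeer Connectivity follows; the service type, display name, and class name are illustrative, and only the required delegate methods are stubbed:

```swift
import MultipeerConnectivity

// Sketch: advertise a session on Apple TV and accept nearby peers
// (service type and display name are illustrative).
class GameLink: NSObject, MCSessionDelegate, MCNearbyServiceAdvertiserDelegate {
    let peerID = MCPeerID(displayName: "Apple TV")
    lazy var session: MCSession = {
        let s = MCSession(peer: self.peerID)
        s.delegate = self
        return s
    }()
    lazy var advertiser: MCNearbyServiceAdvertiser = MCNearbyServiceAdvertiser(
        peer: self.peerID, discoveryInfo: nil, serviceType: "songpop-party")

    func start() {
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
    }

    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session) // accept and join the shared session
    }

    // Required MCSessionDelegate methods (minimal implementations).
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didReceive stream: InputStream,
                 withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String,
                 fromPeer peerID: MCPeerID, at localURL: URL, with error: Error?) {}
}
```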

Now, this morning, we announced the new Apple TV Remote app.

Now everyone in your household that has an iPhone can have an Apple TV Remote in their hands.

To your apps it looks and behaves just like the Siri Remote, but we’ve added a couple features to take advantage of the screen.

The Now Playing screen provides a full set of playback controls as well as cover art for your media.

Just like showing art and controls on the iOS lock screen, you use the Media Player framework’s remote command APIs to add this information to the Remote app.
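As a hedged sketch of that integration, publishing now-playing info and a remote command might look like this; the title and duration values are placeholders:

```swift
import MediaPlayer

// Sketch: publish now-playing metadata and handle the play command,
// which surfaces in the Apple TV Remote app and the lock screen.
func configureNowPlaying() {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Episode 1",          // placeholder title
        MPMediaItemPropertyPlaybackDuration: 3600.0     // placeholder duration
    ]
    let commandCenter = MPRemoteCommandCenter.shared()
    _ = commandCenter.playCommand.addTarget { _ in
        // Resume playback here.
        return .success
    }
}
```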

The app also provides a screen designed for gameplay.

To your game, the new app shows up as a micro gamepad just like the Siri Remote.

To take advantage of multiple micro gamepads, you need to opt in by setting an Info.plist key.

In addition to Siri Remote and the new Remote app, tvOS also supports MFi game controllers.

And coming this fall, tvOS will also support up to four simultaneous game controllers.

[ Applause ]

But we didn’t want to stop there.

We wanted to enable even more awesome games.

We wanted to enable games that might require these kind of controllers, so we’re happy to announce that we’re updating our controller policy.

You can now mark your games as requiring a game controller.

[ Applause ]

tvOS will check if the required controller is present and prompt the user if necessary.

That’s an update on tvOS.

We’re super happy to be here.

We’ve got a lot of labs and sessions with the tvOS engineers there, so please come by and stop by.

Next, Geoff Stahl is going to come up and talk about graphics technologies.


[ Applause ]

Thanks, Jim.

So I’m going to talk about graphics.

I’m going to talk about a couple areas of graphics.

We have really great things to talk about today, and I’ll start with color.

So we’re building astonishing new displays for our latest iPads and Macs, ones that are capable of reproducing absolutely stunning color.

In fact, as DisplayMate says, “The color accuracy is visually indistinguishable from perfect.”

So how do we do this?

Well, this is all about color gamut.

So most displays are built with the sRGB color gamut, which is narrow and doesn’t accurately reproduce all the colors you would see in life, in things like flowers or paint colors or maybe even the clothes you wear.

So we’re moving to the wide P3 color gamut, which contributes to our ability to really accurately reproduce and render these real-life objects.

But we’re not stopping there.

We’re going to go all in on color.

From our system APIs to our system applications, we support deep and wide color throughout.

And if you’re an application using UIImageView or UIView, you get it for free.

You get it automatically.

If you have to manipulate wide colors yourself synthetically, we also offer APIs for that.

And of course, we have a great capture story.

Our latest cameras support capturing deep and wide color.

We have APIs to access raw images.

And we now also have APIs to capture Live Photos.

So that’s color.

[ Applause ]

I’d like to talk about technology now that’s changed the direction of industry, and that’s Metal.

Metal we introduced two years ago and has been wildly successful.

And keep in mind, when we developed Metal, and when we enhance it, we enhance it with our devices in mind.

That allows us to innovate at a rapid pace, with things like Metal tessellation, which enables high-order surfaces for rendering accuracy like never before, or Metal function specialization, which, combined with Xcode, lets you build a set of shaders that automatically handle the material and lighting properties in your scene.

And there are things like memoryless render targets, which use the architecture of our tile cache to reduce the amount of memory your application uses, or resource heaps, which let your application specialize the way it handles memory within a Metal app.

And Metal is everywhere.

Hundreds of millions of devices use Metal.

Our key graphics frameworks and major game engines are built on top of it.

So whether you’re using a high-level API or programming directly to Metal itself, you get all the performance optimizations we’ve built in.

Another thing about Metal, it’s one of our foundational technologies of our games ecosystem.

So let’s talk about games.

Over the past few years, we’ve built a great games ecosystem, and the goal of this is to build the APIs and tools which allow you to take your ideas in gaming and turn them into reality.

I want to cover some highlights here today.

First, as mentioned this morning, ReplayKit.

We introduced ReplayKit last year, and it’s really super easy to adopt.

It allows your users to start a recording, play their game, edit that recording, and then share with their friends.

Well, this year, we’re going to take it up a notch with ReplayKit Streaming.

Now, ReplayKit can live stream to services that support the ReplayKit Streaming Extension.

This allows your users not just to share with their friends, but to stream live to the internet with ReplayKit Live Streaming.

[ Applause ]

And even better, it’s about three lines of code to adopt if you already use Replay Kit.

This is super easy and could really expand the social reach of your applications.

So speaking of social, let’s talk about Game Center.

We’re changing the way that Game Center Multiplayer works.

Now with our latest OS’s, you can invite anyone you can send a message to.

So your users can invite anyone they can reach via Messages, and it is that easy.

It’s just as easy as sending a message.

And the cool thing here is, if you’re already using the Game Center multiplayer APIs, you don’t need to do anything.

With the latest OS’s, this just works, and we’re not going to stop with that.

We’re going to add new APIs to Game Center.

Game Center Sessions.

Game Center Sessions creates a persistent, shared experience where users can come and go.

This means you can design your games with the mobile user in mind, embracing the way they come and go from your application, and that allows a new paradigm in multiplayer gaming in the mobile space.

Finally, GameplayKit.

GameplayKit is our component-based API that lets you instantiate and customize components as the building blocks for your game objects, letting us do the heavy lifting for you.
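As a small sketch of that component pattern, the FightComponent class and its health property below are illustrative, echoing the kind of behavior shown in the demo that follows:

```swift
import GameplayKit

// Sketch: a custom GameplayKit component (names are illustrative).
class FightComponent: GKComponent {
    var health: Int = 1 // customizable per entity, e.g. from the Xcode editor

    override func update(deltaTime seconds: TimeInterval) {
        // Per-frame fighting logic would go here.
    }
}

// Attach the same component class to different entities with custom values.
let hero = GKEntity()
let heroFight = FightComponent()
heroFight.health = 2
hero.addComponent(heroFight)
```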

And I’d like to invite Norman Wang on stage to give a demo of these components and our Xcode gameplay tools.

[ Applause ]

Thanks, Geoff.

I’d like to show you how quickly I can build a game in Xcode 8.

Here’s a game project I’m currently working on.

It’s a competitive game involving a hero and enemies throwing paint balloons at each other.

Now, I have been using the new APIs from SpriteKit and GameplayKit this year.

As you can see, the game needs three major elements to be ready; a map, a hero, and the gameplay pieces.

I have already implemented the hero’s movement and animation, but it is clearly missing the collision with the island bounds, so let’s go ahead and fix that.

Opening our Xcode project and looking at the source code section, I have already written a few gameplay behaviors using GKComponent, provided by GameplayKit, like the fight component here.

And in Xcode 8, I can now expose any of the properties defined in my class to the 2D editor, where I can customize them whenever the behavior gets attached to an entity in the scene.

So to build the island map, I’ve been using the new Tiled Map Editor.

So in my game, there’s three different tile sets that I have specified.

There’s sand, water, and grass.

Not only do I have the ability to specify both the interior and exterior tiles, I can also introduce variance.

So here, for example, with this individual tile image, if I want to give it a new look and get rid of the small rock, I can simply drag a replacement in from the Media Library.

And now this tile set is complete.

Let me go ahead and show you how I can use this.

So switching back to the island map, the island is constructed with the new tile map functionality provided by SpriteKit, so to make a modification, I can simply select the tile map.

I double-click and select the tile that I want to work on.

Now, I think the island looks a little bit too plain, and to fix that, I can simply paint across the level, and Xcode will automatically paint the correct tiles to match the surrounding neighbors.

It’s that easy.

So now I think I got the look I’m going after with the island.

Let’s look at some of the gameplay elements here.

To power our hero’s movement and animation, I have already attached the player input component and the move component.

And to make the player respect the island boundaries, I can simply add the collision component I have implemented.

This way, it will automatically create physics bodies for our hero and the island based on the current tile sets that have been set up.

And to give the player the ability to throw the paint balloons, I can simply add the fight component to it.

And notice here how the health property exposed by the fight component class is now visible.

I can give it a custom value here.

So for example, I can set the value to be 2 rather than using the default value of 1.

And in addition, I have a drone that drops these paint balloons on the level, and I have an enemy in the scene.

So to give the enemy the same fighting ability, picking up the paint balloons and throwing them at me, I’m going to give it the same fight component.

And to make this a fair game, I’m going to give it the same health level of 2 rather than using the default value of 1.

Now, I think my level is set up pretty well.

Let’s run it and check out the new implementation.

So the drones are going to be dropping the paint balloons randomly, and the paint balloon fight will start.

So it looks like all the elements are there.

So in Xcode 8, it’s very easy to implement a game level and connect all this gameplay logic fairly quickly.

Because I’m using SpriteKit and GameplayKit, my game will automatically run on all Apple platforms.

Thank you.

Now, back to Andreas.

[ Applause ]

Thank you, Norman.

All right, so let’s just quickly review the most important points we talked about.

There are now four Apple OS platforms, each of them with their own App Store that you can bring your ideas to.

And today we added a large variety of new extension points that allow you to hook even deeper into our OS’s.

Perhaps most importantly, we recommend that you consider creating an iMessage App and also that you integrate your apps with SiriKit.

But these are just two of the many new APIs we are announcing today.

And support for all of them will be provided by a new version of our Xcode IDE.

Xcode 8 is going to run on OS X El Capitan and macOS Sierra.

Now, of course, it also has support for developing with Swift 3.

Now, all these technologies are available for download from the WWDC Attendee Portal today, so you can get early access and use the time before we ship our products later this year to get ready for the launches and to create even more powerful and unique apps for your users.

Also, go and check out the new Swift Playgrounds app, which we’re including in the iOS 10 Developer Preview.

I think you’re going to have a ton of fun with it and perhaps you’ll consider creating additional content for your kids to start learning how to program Swift.

Now, there are many opportunities here at the conference for you to learn more about everything that we announced today.

There’s more than 100 sessions you can attend and even more labs.

You can meet every engineer you see on site one on one.

They are here to answer your specific questions.

So I hope you enjoyed the session and I’ll see you around later this week.

[ Applause ]
