Platforms State of the Union 

Session 102 WWDC 2017

Ladies and gentlemen, please welcome Vice President of Platform Technologies, Andreas Wendker.

[ Applause ]

Good afternoon.

Welcome to WWDC.

As you heard this morning in the keynote, this is a year with a strong focus on technology fundamentals and refinements across all Apple product lines.

We’re introducing numerous APIs that enable new use cases for your apps, covering a broad spectrum that ranges from machine learning, to augmented and virtual reality, to access to the Apple Music library.

We’re also making improvements to many of our core technologies.

For example, our new modern file system, APFS, is now even more powerful.

We created a new, much faster version of Metal, and we helped define and adopt a new high-efficiency image format with advanced compression.

Now, most of these technologies and APIs apply to all our operating systems across the board, so they’re moving forward together.

But let’s also take a look at some specific highlights that affect you as software developers.

iOS 11 has a huge number of incredible features, but perhaps most importantly, it is the biggest iPad release ever, and it turns iPad into a major productivity device with a new Dock, drag and drop, file management, and enhanced multitasking.

In macOS, we really took the time to broadly clean up performance and animations.

And with additional support for Fusion Drives and for disk encryption, APFS is now ready to come to the Mac as a fully supported file system.

We also laid the groundwork for turning the Mac into a virtual reality powerhouse.

And for watchOS, we made our UI components more flexible so that you can create more custom user experiences.

New modes for navigation and audio recording allow your apps to continue processing data and providing feedback to the user, even when running in the background.

And the new unified app runtime keeps your apps more responsive.

We also added support for Core Bluetooth so that you can now directly connect to BTLE devices and display data from them right on the user’s wrist.

And in tvOS, we implemented a wide variety of enhancements that you can take advantage of for rounding out your apps’ functionality for all your users, including right-to-left support for the appropriate languages.

And beyond our operating systems, we’re also hard at work on improving our general developer ecosystem and the support we give you for marketing and shipping apps.

We increased the number of users you can have, when you ship beta versions of your apps in TestFlight.

And the App Store now allows you to roll out updates to your apps over a period of several days.

You’ll be able to respond to user feedback and user reviews of your apps.

You’ll be able to post promotional materials on your Store pages and much, much more.

And we are modernizing the content in our App Stores.

We’ve been working for a while now on getting all iOS software to take advantage of the power and performance of our 64-bit processors.

With iOS 11, we’re completing this transition.

iOS 11 is going to be 64-bit only, and 32-bit apps are not going to be supported anymore.

And with that, it’s now time to turn our attention to doing the same with macOS.

The transition to 64-bit-only Mac apps is going to take place very similarly to how we handled it on iOS.

Most importantly, High Sierra is going to be the last macOS release to fully support 32-bit apps without compromises.

In the next major release after High Sierra, we’re going to aggressively start warning users if apps are not 64-bit compatible.

We’re updating the Mac App Store rules accordingly.

In about six months, we’ll require that all new apps submitted to the Mac App Store will be 64-bit capable and, in a year from now, we’ll require all new apps and updates to existing apps to be 64-bit only when submitted to the Mac App Store.

So for the next 90 minutes or so, we’re going to go deep on the most important APIs and technologies that we’re announcing today.

We’re also going to cover our developer tools in depth.

In fact, we’re going to start with Swift Playgrounds.

I’m going to hand it over to Matthew Firlik for that.

[ Applause ]

Thank you, Andreas.

Swift Playgrounds has brought something new and important to education: teaching programming to kids in a fun way using real code.

And the response has been tremendous.

After less than ten months on the App Store, over one million people have started to code with Swift Playgrounds.

And these users are from all over the world, as we made the app and content available in six localizations.

In fact, almost two-thirds of these users are from outside the United States, making Swift Playgrounds an international success.

Since our first release last year, we’ve added a number of new features to the application, such as line-by-line highlighting of your code as it’s running, content notifications when new content’s available, and the ability to add new Playground pages.

These features and others, alongside new content, have given Swift Playgrounds fantastic momentum.

Now, as you may have already heard, today we’re making a new version of Swift Playgrounds available, version 1.5.

And in this release, we’re making it easier for users to connect to Bluetooth robots, drones, and devices.

And this takes Swift out into the real world, venturing even outdoors.

And it harmonizes naturally as your programming skills evolve.

Now, with this release, we have worked with some amazing partners who also wanted to design great experiences for their devices using Swift, like Parrot with their mini-drones that can fly and flip, UBTECH with their buildable and programmable MeeBot, Sphero with their dynamic SPRK+ robotic ball, Wonder Workshop and the racing and talking Dash, Skoogmusic and their tactile musical cube, and MINDSTORMS EV3 by LEGO.

[ Applause ]

The limitless possibilities you can build with MINDSTORMS make it a perfect pairing with Swift Playgrounds.

In fact, all of our partners have truly embraced the interactivity of the app, designing intuitive APIs you can drag and drop and presenting up-to-date device and sensor data in the live view.

And this combination is truly captivating as you see your thoughts become code and bring devices to life.

Now, all of our partners have designed Playgrounds to work with their devices, and you can find them in the new Accessories tab of the app, where there are samples to play with and templates to create your own file.

And we are truly eager to see how users of all ages take advantage of the new Swift Playgrounds because we believe it is the best way to control robots and drones with code you write yourself.

The new version of Swift Playgrounds is available today for free in the App Store, and you can find many of the Bluetooth accessories from our partners at an Apple Store near you.

Now, there’s another version of Swift Playgrounds I’d like to share with you today, and that’s Swift Playgrounds 2, shipping later this fall.

Today, we have engaging content, like our Learn to Code series, challenges, and the new Playgrounds from our partners, but we also know that you have many inspired ideas too, so this fall, we’re going to make it easy for everyone to share Playgrounds.

With Swift Playgrounds 2, we will support feeds of third-party content that users can subscribe to.

And this content makes it really easy for schools to host curriculum, for developers to post API explorations, and for friends to share ideas.

This release will also include an integrated documentation experience, support for Swift 4 and the iOS 11 SDK, and enable the use of camera and augmented reality APIs.

And we’re going to extend the global reach of the application by adding localizations for eight additional languages.

Beta versions of Swift Playgrounds 2 will be made available via TestFlight, and you can sign up at to try out these releases.

So two big versions of Swift Playgrounds for you.

I’d like to switch topics and start talking about Xcode 9.

Xcode 9 is a major release.

We have made significant investments in our foundations, workflows, and performance across the tools, with some pretty substantial results.

I’d like to start today by talking about our Source Editor.

In Xcode 9, we have completely re-implemented our Source Editor.

It’s been rewritten from the ground up in Swift.

[ Applause ]

And I agree.

It’s a pretty big deal.

So at first glance, you’ll see the familiar crisp presentation of multiple fonts and weights, but now with more options for line spacing, cursor types, and more.

And we took the notion of semantic presentation and didn’t stop at source code because we now have an integrated Markdown editor as well.

[ Applause ]

In addition to stylized editing, you’ll find the familiar editor experiences, like using the Jump bar for navigation, Command-clicking on links, even using Edit All in Scope to change references.

Now, when you start working with your source code, you’ll find that issues now display smartly, no longer adjusting the layout of your code.

Issues now include more detail yep.

[ Applause ]

Issues now display with more detail and have the Fix-it workflow integrated right alongside, including the ability to apply multiple Fix-its with a single click.

[ Applause ]

And this new issue presentation is the perfect canvas for the over 300 new diagnostics, analyzers, and Fix-its we’ve added to help you all write great code.

Now, performance is also important, and in this release, you’ll find the editor opens files three times faster, you’ll find smooth scrolling at 60 frames per second as you move about the file, and jumping to lines is 50 times faster than before.

[ Applause ]

To put it simply, everything happens instantaneously.

Now, the editing experience is also new in Xcode 9.

Our editor now has more semantic awareness, and we’ve brought over the tokenized editing experience from Swift Playgrounds.

As you move about your source, we’re going to highlight the structure underneath.

[ Applause ]

And when you click on one of the tokens, we’re going to present an Action menu full of symbolic transformations to help you evolve your code.

[ Applause ]

And I’m guessing some of you know where this is going next.

And yes, it is true.

Xcode 9 has a brand-new refactoring system for it.

[ Applause ]

Indeed. Our refactoring system works with Swift, Objective-C, C, and C++, so you can use it throughout your projects.

And the refactoring workflow takes advantage of the new tokenized editing experience to present contextual actions, like Extract and Rename, and all of these actions are bindable, so you can set key equivalents for ones you use frequently.

And we also have a gorgeous presentation to help you review changes.

And to show you what this is like, I’d like to bring up Ken Orr to give you a demonstration of the new Source Editor with refactoring.

[ Applause ]

Thanks, Matthew.

So Xcode 9 has a brand-new Source Editor rewritten from the ground up to be fast, and I think speed is one of the very first things you’ll notice.

Scrolling in this 10,000-line file is super smooth all the way down to the bottom, and I can jump instantly right back to the top.

And of course, code, it looks absolutely gorgeous.

Let me make that a little bit bigger so it’s easier for you to see.

I’ll just press Command+ a few times for you.

[ Applause ]

It’s the little things, right.

So my project here is an iOS app, and it lets users explore our solar system.

And I want to add a little bit of code.

I’m going to jump over to this Objective-C file, and this bit of code that I just added, it has a small problem.

So first off, you can see the brand-new issues presentation, and it’s got Fix-its built right in.

Now, this particular issue is new in Objective-C, and it’s warning me that I’m using iOS 11 API, but my project here, it deploys back to iOS 10, where this would crash [applause].

And this issue has a Fix-it, which I’ll accept.

And when I do, Xcode wraps my code using the new Objective-C keyword, @available.

So now, that little bit of code, it’ll only run on iOS 11.
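
The Fix-it wraps the call in Objective-C’s new @available check; the Swift equivalent is #available. Here’s a minimal sketch of the pattern, where the helper function and its return values are hypothetical, not from the demo:

```swift
// Hypothetical helper: choose a code path based on the running OS version.
// The #available check is evaluated at run time, so the newer branch only
// executes on iOS 11 / macOS High Sierra or later.
func diskAccessStrategy() -> String {
    if #available(iOS 11.0, macOS 10.13, *) {
        // Safe to use new-in-2017 API here.
        return "modern-path"
    } else {
        // Fallback keeps older deployment targets from crashing.
        return "legacy-path"
    }
}

print(diskAccessStrategy())
```

The compiler enforces that calls to newer API only appear inside the guarded branch, which is exactly what the new diagnostic is checking for.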

[ Applause ]

There’s another new Fix-it I want to show you.

I’m going to jump over to this Swift file here.

This is the class that represents a moon in our solar system, and I’m going to adopt the physics body protocol.

It’s got a few different methods.

Whoops. It’s got a few different methods that I need to implement.

Of course, I haven’t implemented them yet, so I get an issue.

And now, with just one click on the single Fix-it, it’ll add all the methods that I haven’t yet implemented.
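
The protocol and type names below are assumptions sketching what the demo’s Fix-it produces: one click inserts a stub for every unimplemented requirement.

```swift
// Hypothetical protocol standing in for the demo's "physics body" protocol.
protocol PhysicsBody {
    var mass: Double { get }
    func applyGravity(from other: PhysicsBody)
    func updatePosition(deltaTime: Double)
}

// After accepting the single Fix-it, Xcode has inserted a stub for each
// missing requirement, ready to be filled in.
class Moon: PhysicsBody {
    let mass = 7.35e22 // kg, roughly Earth's Moon

    func applyGravity(from other: PhysicsBody) {
        // stub inserted by the Fix-it
    }

    func updatePosition(deltaTime: Double) {
        // stub inserted by the Fix-it
    }
}
```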

[ Applause ]

So Xcode 9 can also help me transform and refactor my code.

So let me show you that.

I’m going to jump over to this solarsystem.swift file and I’m going to hold down the Command key and move my mouse.

And when I do, Xcode shows me the structure of my code.

When I click, I get a set of options and transformations that are specific to the expression that I clicked on.

So here I clicked on an expression that’s creating a moon object and then adding that to the Earth.

And, you know, I actually personally prefer to capture all the objects I create in variables, so I’m just going to extract this expression out into a variable, and I’ll use Edit All in Scope to give this a better name, Luna.

And, you know, while we’re at it, actually, I like my methods to be short and to the point, so I think I’m going to pull out this If block into its own method.

And when I do, Xcode creates a new method for me, adds all of that code to it, and then leaves me with a call to that method back where we started.

Let’s give that a new name too.

[ Applause ]

So those are a couple local transformations, but a lot of times what I want to do, I want to refactor the name of something that’s used throughout my entire project.

In fact, I’ve got a great candidate for that here: this position method.

That’s kind of a generic name.

I think I could come up with something a little bit more descriptive.

So I’m going to hold down the Command key, click, say “rename,” and when I do, Xcode folds down the file that I’m in [ Applause ]

Folds down that file and it pulls in slices from across my project where I’m using that method.

So I’ll pick a new name here.

Let’s go with something like “orbital position.”

As I type, it updates live across all those slices.

I can rename parameter names too.

First parameter name, that seems OK.

Instead of “date” for the second one, I’ll go with something like “moment in time.”

Click Rename, and that’s it.

So rename refactoring works great for methods like this.

Also works for classes too.

So up here I’ve got this class, “trans-Neptunian object.”

That’s kind of a mouthful, so let’s give that a little bit simpler name.

I’m going to rename this guy “minor planet.”

So refactoring, it works for everything you’d expect.

In my project here, that means Swift files, Objective-C files, and even storyboards.

[ Applause ]

When I’m done, click Rename, and that’s it.

Really simple.

Really powerful.

That’s the new Source Editor and refactoring in Xcode 9.


[ Applause ]

Thank you, Ken.

I told you it was gorgeous.

So new refactoring is a great way to evolve your code.

In addition to the rename refactoring, we’re also going to include a number of local transformations, like extract, adding missing overrides, and working with localized strings.

But this is just the beginning because we’re going to be open sourcing the refactoring engine as part of the Apple Clang and Swift compilers.

[ Applause ]

Xcode will automatically pick up transformations you build in local toolchains, so you have a great way to prototype new transformations.

So that’s our new Source Editor: great look and feel, awesome performance, and now with refactoring.

A great start to Xcode 9.

And we also have some big news in Swift, and for that, I’d like to bring up Ted Kremenek to tell you more.

[ Applause ]

With Swift, we set out on a mission to make programming easier, faster, safer, and more modern.

And in a little less than three years, the pace of adoption remains phenomenal.

In that time, over 250,000 apps have been written in Swift and submitted to the App Store.

And it’s not just the number of apps that’s exciting.

It’s that companies are building apps in Swift that they really do depend on.

And beyond the App Store, enterprise and business have also embraced Swift.

IBM has authored more than 100 of their MobileFirst iOS apps in Swift, and both IBM and SAP have released SDKs that allow Swift developers to take advantage of their services infrastructure.

So we’re thrilled at how Swift has been doing, but Swift, of course, isn’t standing still.

Xcode 9 includes a new release of Swift, Swift 4, and we’re excited about what it brings.

Now, the focus of this release really is on the core fundamentals, getting those rock solid, because we want Swift to be a great foundation for everyone to build their software upon.

And so I’ll only be able to touch on a few of the things in this release, but we think these are just great improvements across the board.

The first thing I want to talk about is a vastly improved String.

String processing is a fundamental aspect of writing software, right.

We work with strings all the time.

And we’ve always had the goal that string processing in Swift is first class without compromising on Unicode correctness.

And so we’ve made three important strides towards this goal in Swift 4.

First, we wanted to make the API a lot easier to use, right, because you’re using String all the time.

We also wanted to double down on improving fidelity with Unicode.

This is something that string has been good at, but it’s not entirely where we wanted it to be, and we wanted to make strings really fast.

So it’s not hard to find opportunities for making String easier to use, right.

So here’s some fragments of code that you may have written in Swift 3 when you’re working with string, like iterating over the characters, or querying to see if a string has a given character, or stitching strings together.

And what we’ve found is that we were frequently reaching down for this characters view, right.

I mean, you want to get to the underlying characters, but it’s one step removed.

It also creates this unnecessary friction.

So in Swift 4, we have vastly simplified this by removing all this impedance, and now strings are range-replaceable collections of characters.

Not only does the code, you know, read and write exactly as you would expect, but you have all the power of generic algorithms on the collections directly on working with strings.

We’ve also added great syntactic [inaudible] such as multi-line string literals, which include, you know, support for white space.

String slicing is even more lightweight with support for one-sided ranges.

So you could just very nimbly specify the fragment of the string you want in a slice.
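
Taken together, here is a short sketch of the Swift 4 string changes just described; the sample text is arbitrary:

```swift
let greeting = "Hello, WWDC"

// Strings are collections of Characters -- no .characters view needed.
let letterCount = greeting.count        // 11
let hasComma = greeting.contains(",")   // true

// Multi-line string literals, with leading whitespace handled for you.
let banner = """
    Platforms
    State of the Union
    """

// One-sided ranges make slicing lightweight.
let firstWord = greeting[..<greeting.index(greeting.startIndex, offsetBy: 5)]
print(firstWord) // Hello
```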

Now, these improvements to the core ergonomics also must be accompanied with improvements for our Unicode support.

And when you were working with strings in Swift 3, you may have noticed some oddities like this, just kind of leaving you wondering: what is going on?

And it really comes down to the richness of Unicode.

Our concept of the character is actually a composition of multiple Unicode scalar values, and decomposing these correctly is critical to getting that fidelity with Unicode, the characters, and the string API.

So in Swift 4, we’ve moved to the Unicode 9 Grapheme Breaking algorithm.

It now properly separates these characters out as you would expect, giving you full fidelity with the underlying character set.
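
For example, an accented character written as two Unicode scalars now counts as a single Character:

```swift
// "é" spelled as "e" followed by U+0301 (combining acute accent).
let cafe = "cafe\u{301}"

print(cafe)                      // café
print(cafe.count)                // 4 characters, not 5
print(cafe.unicodeScalars.count) // 5 underlying Unicode scalar values
print(cafe == "caf\u{E9}")       // true: canonically equivalent spellings
```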

And this is also really important now that strings are collections of characters.

Now, this improvement to Unicode support has not come at a cost to performance.

We’ve actually fine-tuned String’s implementation.

So for most string-processing operations, for English, French, German, Spanish, and really any language using a Latin-derived script, you’ll see about a three-and-a-half times performance improvement for String.

[ Applause ]

And similarly, you see huge improvements working with simplified Chinese and most modern Japanese texts [applause].

And that is the improved String: faster, much easier to use, and more powerful.

The second feature I want to talk about is about giving you the ability to easily convert Swift types to encoded formats, like JSON and property lists, while providing powerful opportunities for customization.

In Swift, we use both value and reference types, and while NSCoding supports classes, it doesn’t work at all with structs or enums.

This new feature works beautifully with all of them.

Let’s see this in action.

So here I have a simple value type.

This is a struct.

It’s a form.

It’s got some properties.

And I want to make this encodable and decodable to JSON, property list, whatever.

I could easily do this by adding conformance to a new protocol called Codable.

And if the properties of this type have types that also conform to Codable, this is literally all the code I need to write because the conformance to this protocol is synthesized by the compiler.

[ Applause ]

So works as you would expect.

I can construct a value of this type, and with a single line of code, I can serialize this out to JSON.

[ Applause ]

And once I have the JSON value, I can simply reconstitute it back with a single line of code, back into a Form value, and this is also 100% type safe.

[ Applause ]

So simple, easy to use.

We think you’re going to enjoy it.
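
Here is a sketch of the Codable flow just shown; the struct’s name and fields are assumptions standing in for the demo’s form type:

```swift
import Foundation

// Conformance is synthesized by the compiler because every stored
// property is itself Codable.
struct Form: Codable {
    var title: String
    var fieldCount: Int
}

let form = Form(title: "Feedback", fieldCount: 3)

// One line out to JSON...
let json = try! JSONEncoder().encode(form)

// ...and one line back, fully type-safe.
let decoded = try! JSONDecoder().decode(Form.self, from: json)
print(decoded.title) // Feedback
```

The same Form value could be serialized to a property list instead by swapping in PropertyListEncoder and PropertyListDecoder.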

Now, there’s a lot of great features in Swift 4, and we’ve also been excited about the adoption of Swift.

And so another key goal in Swift 4 is we wanted to make all the features easy to adopt.

And that’s why in Xcode 9 you can take your Swift 3 projects, open them up, and build them with no modifications.

This means you can be on the latest OS, tools, and even make use of many of the new language features in a release without having to convert your code.

So how does this work?

In Xcode, there is a build setting: Swift Language Version.

It has two values, Swift 3.2 and Swift 4.0.

What this means is there’s one compiler here that supports these two different language modes.

And what you can do is have a single project that mixes and matches targets built with either 3.2 or 4.0.

It also means you could have an app that’s written using all the new Swift 4 features but uses a package or framework that’s built using Swift 3.2.

So it’s this extremely easy on-ramp to taking advantage of the latest features in Swift.
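
As a sketch, the setting can also be expressed per target in an .xcconfig file (the two values are the ones named in the talk):

```
// One target left in Swift 3 compatibility mode...
SWIFT_VERSION = 3.2

// ...while another target in the same project opts into Swift 4 mode:
// SWIFT_VERSION = 4.0
```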

So what is Swift 3.2?

It’s a language mode.

You can basically build your projects pretty much with no modifications.

You can access pretty much all the new language features: String, Codable, all of them.

You can access all the new APIs from the SDK.

The only thing you don’t get is any Swift-related improvements to existing APIs that you may have been using before.

That’s what gives you that easy onboarding without having to modify your project.

To get those benefits, there’s a Swift 4 mode.

You have access to essentially all the new stuff.

There’s also some new opportunities to further improve the performance of your app, including reducing the size of your binaries.

There’s a lot of great improvements in the SDK for Swift developers, but to take advantage of those, there is some migration of your code required.

But these are refinements.

These are very evolutionary changes.

And compared to moving from Swift 2 to Swift 3, moving from Swift 3 to Swift 4 is very simple.

Now, we’ve been thrilled about the investment everyone has been making in Swift, and as we’ve seen that investment grow, so has the size of large Swift projects, right.

People are writing more code in Swift.

And we recognize that this is an important area for us to focus our efforts on the tools.

And so we’ve focused on two key areas to reduce build times for large projects.

The first is mix-and-match Objective-C and Swift projects.

[ Applause ]

By pre-compiling the bridging header, we’ve seen up to about a 40% improvement in build times, particularly for debug builds.

And we saw this ourselves, compiling our own apps.

And similarly, if you’re using whole-module optimization, we’ve vastly improved the utilization of parallelism across the compiler and the build system, and you can see up to a 2x improvement in build times.

Now, this optimization was so important that we didn’t want to hold it back for Xcode 9, and so we actually aggressively released it in Xcode 8.3.2.

And so many of you have already been experiencing the benefits of this build improvement.

And for those of you who haven’t been using whole-module optimization, I encourage you to give it a try, given the huge wins here.

And that is just a taste of some of the things in Swift 4.

We think you’re going to enjoy it.

With that, I hand it back to Matthew.

[ Applause ]

Thank you, Ted.

Another key area of Xcode 9 is our core technologies, and I’d like to start with our indexers.

Our source and text indexes provide the back end for many work flows, like navigation, searching, and refactoring.

And in Xcode 9, we have re-architected them to store richer data and have improved performance.

In fact, you’ll see workflows like Open Quickly run 35 times faster across your projects.

And searching in large projects is up to 50 times faster.

[ Applause ]

So our new indexers make a huge difference.

Additionally, Xcode 9 will include indexing while building.

Xcode will still index in the background [ Applause ]

Xcode will still index in the background but, when you’re building, will take advantage of work already being done to generate and update the index.

So when you’re done building, your index is up to date and all the associated functionality is available to you quicker than ever.

[ Applause ]

Now, our build system is also a key component and one we’ve made significant investments into to improve performance, add features, and support future goals.

And I’m excited to share with you today we have a new build system.

Written in Swift, our new build system layers on top of our open source lower-level build system, llbuild.

It uses a modern, forward-looking architecture that has delivered some major improvements.

Our new build system uses process separation, has a unified dependency graph, and has improved configuration analysis.

And we’ve used our new architecture to improve parallelism and caching to reduce overall build times.

I’d like to dig into this for a moment.

The build process involves two main pieces.

There’s the build system that manages and coordinates tasks and the build tools like compilers and linkers.

Together, these make up the total build time.

Now, the work for these components grows with the size of your project.

The larger the project, the more work there is for each component.

Now, this view represents a full build.

But most of the time, we are incrementally building our projects, only changing a few files at a time.

While there is less work in these cases for the build tools, for larger projects, there are still a number of details to manage.

This is an area we’ve focused on with Xcode 9.

In our new build system, we’ve made the build operations two-and-a-half times faster than they were before.

And if you add on top of this, yep, it’s fast [laughter], and if you add on top of this the 40% improvement we’ve seen in compilation with mix-and-match projects, then you can see a marked improvement with Xcode 9’s new build system compared to the previous release.

Now, the new build system is designed to be compatible with your projects, and we’ve made a preview available in Xcode 9.

You can opt in in the Workspace Settings to try it out, and we’ll be making it the default build system coming soon.

Now, another core technology area we’ve invested in is source control, and I’m curious.

A show of hands.

How many of you have GitHub accounts?

All right.

Craig raised his hand.

So you are in great company because our good friends at GitHub let us know that on the desktop, two-thirds of all pull-request-related activity happens from a Mac.

So we thought we could do something special here, which is why we have integrated GitHub.com and GitHub Enterprise with Xcode 9.

[ Applause ]

After you add your GitHub account, you can use Xcode’s new cloning workflow that presents all of your projects and search results from GitHub.

You can add stars, review project details, even check out READMEs before you access the project.

Once you clone a project, you can use Xcode’s new source control navigator, and this presents all of your working copies, including details like branches, tags, remotes, even push and pull counts.

And this is just the start of some of the amazing new source control workflows we have for you.

And to show you more, I’d like to bring up Mike Ferris to give you a demonstration.

[ Applause ]

Hi. Thanks, Matthew.

Hello. I’m going to show you today some of the great new ways to work with source control in Xcode.

Let’s get started with GitHub.

I’m already c1nnected, and here in the new clone window, all of my GitHub repositories are front and center.

Today, I actually want to find the Swift Foundation Project, so I’m going to search GitHub.

Great. There it is.

Let’s go ahead and clone it.

Once the clone completes, Xcode opens the project.

Now, my friend Felipe has been telling me about some recent changes to the data class, and I wanted to take a look.

The new source control navigator allows me to explore a project.

I can select the current branch to show its history, and I see all of the commits, including avatars for the commit authors and annotations for things like tags.

I can filter commits by author or commit message, so I’ll start by finding Felipe’s commits, and then I’ll narrow it down a little bit more to just the commits that have to do with beta.

This is the one that I was looking for.

When I double-click a commit, I go to the new commit viewer where I can see the actual source changes.

It’s really easy to browse history and to find specific commits.

Now, I can also seamlessly use source control while I’m making changes, and I have another project that needs a little work that I’ll open now.

Before I get started, I want to make a branch.

In the new source control navigator, I’ll use the Context menu to make a new branch from “master.”

And, OK. I’m all checked out on my new branch, and I’m ready to go.

In my projects, I like the groups to match up with the folders, and this project is already set up that way.

But it’s been bugging me that these two camera model classes are in the wrong group.

It’s OK, though, because now when I move these files to the correct group, Xcode finally also moves them to the correct folder.

[ Applause ]

So in projects where the group and folder layout are the same, Xcode will now keep them the same.

And when I commit, all the changes will be recorded, so I’ll go ahead and commit this now, and I think I’m ready to land this branch.

Back in the source control navigator, I’ll select the destination branch and again use the Context menu to merge in the changes for my working branch.

And that’s it.

I can see that my new commit is now on the master branch, and I think I’m ready for my first beta release.

So I should probably make a tag.

I can use the Context menu here to make a tag from any commit.

OK, I think I’m ready to share this.

Now, this project isn’t on GitHub yet, but I can easily put it there.

I’ll just choose to create a new GitHub remote.

All these defaults look pretty good, so let’s go.

Xcode is now creating a project on GitHub and pushing up my repository with its entire history.

And now, my project is hosted on GitHub.

I have a new origin remote.

And from there, I can jump directly to the project page on GitHub.

Here it is.

Now, we’ve worked really closely with the GitHub folks on all this integration, and GitHub has a great new feature as well.

Xcode users can now clone a repository and open it directly in Xcode using this new Open in Xcode button.

[ Applause ]

Yeah. And that is Source Control and GitHub in Xcode 9.


[ Applause ]

Thank you, Mike.

Xcode 9 also includes some advancements in our debugging and runtime analysis tools.

I’d like to start with the view debugger.

View controllers play an important role in UI development, so we’re going to incorporate them into the view debugging experience.

Captures will now include view controllers in the hierarchy and draw them on the canvas above the views they manage.

Together, these will help you navigate and give you a sense of your view controller boundaries.

The view debugger will now also include details from SpriteKit scenes, and the view debugger is a perfect way to expand and rotate your scenes, scope ranges, even look at clipped regions.

We’ll also include support for SceneKit scenes as well.

And here you can use the familiar SceneKit editor to rotate the camera, navigate, and inspect objects.

So view controllers, SpriteKit, and SceneKit are now great additions to our visual debugging.

[ Applause ]

Over the last few years, we’ve added runtime sanitizers to our debugging experience, and they have been wildly successful at helping track down issues.

In addition to advancements in our two existing sanitizers, this year, we’re adding two new runtime analysis tools, the Undefined Behavior Sanitizer and the Main Thread API Checker.

Programming languages leave some behaviors undefined, such as dereferencing misaligned pointers or overflowing signed integers.

When these situations occur, they are very difficult to debug.

The Undefined Behavior Sanitizer catches many of these cases and displays details to help you investigate.

Now, the Main Thread API Checker is one you’re all going to love.

Calling UI-related APIs from a background thread is a common mistake and can lead to visual defects and random crashes.

The Main Thread API Checker catches AppKit and UIKit APIs not called from the main thread and then surfaces issues for you to investigate.

[ Applause ]

We love this one so much that we’ve enabled it by default.

So when you start debugging with Xcode 9, you’re going to get this behavior for free.
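The kind of mistake the checker flags can be sketched in a few lines (a minimal sketch; the view controller and URL here are hypothetical):

```swift
import UIKit

final class StatusViewController: UIViewController {
    let statusLabel = UILabel()

    func refresh() {
        let url = URL(string: "https://example.com/status")!
        URLSession.shared.dataTask(with: url) { _, _, _ in
            // Wrong: this completion handler runs on a background queue,
            // so a direct `self.statusLabel.text = "Updated"` here is
            // exactly what the Main Thread API Checker reports.

            // Right: hop back to the main thread before touching UIKit.
            DispatchQueue.main.async {
                self.statusLabel.text = "Updated"
            }
        }.resume()
    }
}
```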

Now, our two new runtime sanitizers are available in the Diagnostic Options of your scheme, and all of our diagnostic tools are available for the Run and Test actions, making them perfect for use with continuous integration, which is also a perfect segue into our next section.

We believe that continuous integration is an essential piece of developing great software, so we’re making it easier to use Xcode Server by including all of the functionality in Xcode itself.

You no longer need to install the macOS Server app.

[ Applause ]

A new, simplified UI in the Preferences takes just a few clicks to get started, and we’ve integrated new provisioning work flows, including automatic and manual code signing to streamline your configuration.

Now, on the testing side, a popular request we’ve integrated is support for testing multiple applications.

Your UI tests can now branch out into many other apps, which is a great improvement for working with app extensions, settings, and other integrated work flows.
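A multi-app UI test can be sketched like this (the companion app's bundle identifier is a placeholder):

```swift
import XCTest

final class HandoffTests: XCTestCase {
    func testFlowIntoSecondApp() {
        // Launch the app under test as usual.
        let app = XCUIApplication()
        app.launch()

        // New in Xcode 9: drive another app by its bundle identifier.
        let companion = XCUIApplication(bundleIdentifier: "com.example.CompanionApp")
        companion.activate()

        // Wait for it to reach the foreground before making assertions.
        XCTAssertTrue(companion.wait(for: .runningForeground, timeout: 5))
    }
}
```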

Additionally, you’ll find that we’ve focused on performance for UI testing as well.

UI testing queries run three times faster than they did before.

And if you adopt new query APIs, your own tests can run up to ten times faster than before.

[ Applause ]

But we didn’t stop there because you’ll also find testing with xcodebuild now defaults to running in parallel for multiple destinations.

[ Applause ]

This means your command-line testing and your Xcode Server bots can evaluate tests on many devices and simulators all at the same time.

And speaking of the simulator, the simulator app now supports multiple booted devices for iOS, tvOS, and watchOS all in one app.

[ Applause ]

Multiple simulator sessions is a huge productivity win when testing different configurations and when evaluating multi-client scenarios.

You’ll also find the simulator has a new experience.

We’ve added a bezel around the simulated device to give you familiar access to hardware controls and enable interactivity, such as swipe gestures from the edge.

And we’ve also made the simulator fully resizable.

[ Applause ]

So there’s one more feature in Xcode 9 that I’d like to share with you, and it starts with something that we’re taking away: the need to use one of these.

[ Applause ]

You know, I was going to say, I know you all love your USB cables, but that’s apparently not true [laughter].

But it doesn’t matter because in Xcode 9, it’s no longer necessary.

We are adding in wireless development.

[ Applause ]

With Xcode 9, you can use the connection type best suited for your needs, USB, Wi-Fi, or Ethernet, when connecting to your iOS and tvOS devices.

All of the development work flows are supported with wireless development.

In fact, you can also use wireless development with other apps, like Accessibility Inspector, QuickTime Player, and Console.

So this will be a natural addition to your development work flows.

So that’s just a taste of what we have for you in Xcode 9: a new Source Editor with refactoring alongside Swift 4; a new indexer and build system; source control work flows integrating GitHub; and advancements in our debugging, analysis, and testing tools to go alongside wireless development.

All that and more is in Xcode 9, and that’s our tools update for today.

[ Applause ]

Next, we’d like to share with you some of the new APIs we have for you, and for that, I’m going to invite up Josh Shaffer.

[ Applause ]

Thanks, Matthew.

I’m really excited to tell you about so many of the new and enhanced APIs in iOS 11.

There’s a ton of new things, so let’s start with the most important new API, drag and drop.

Drag and drop is simple to use, with consistent interactions across the system, and of course, it’s also really easy to add to your apps as well.

While simple, it’s also incredibly flexible and customizable, and it takes full advantage of iPad’s multi-touch capabilities.

And of course, it’s secure by design, allowing access to the data being dragged only once a drop occurs.

Adding drag and drop to your apps is almost as easy as using it.

It’s completely automatic for standard text and web controls, so your apps may actually already support it.

Existing apps like Slack can already participate, for example by accepting drops of text in their existing text views.

For other cases, you’ll need to do a little work.

Now, if you’re displaying your content in a table view or a collection view, new delegate methods make it really easy to add drag-and-drop support with just a few lines of code.

Beginning a drag is as easy as providing the data, and accepting a drop is just as simple.

For everything else, there’s an easy-to-use, standard API that can integrate with all of your custom views.

By default, the API provides standard gestures and animations, but it’s fully customizable.

You can customize the lift animations that occur at the start of a drag, add multiple items to an already in-progress drag, generate custom preview images for each item, choose the badge to display on the dragged item to indicate that it will be moved or copied, and even provide fully custom set-down animations to play when the drop occurs.

The API is flexible enough that even the really custom, gesture-based event creation and reordering in the Calendars app was able to be re-implemented on top of the drag-and-drop APIs.

And the flexibility doesn’t just end with the appearance.

The data model is incredibly flexible as well.

It can work with any type of data that your app uses, allowing you to support really rich interactions.

For example, locations dragged out of maps include rich location information and really detailed drag previews.

Now, with drag and drop, even existing interactions get better.

Reminders uses UITableView’s built-in reordering support to allow you to quickly reorder and organize your tasks.

You can begin a drag to drag an item, to move it to a different spot, and now even pick up multiple reminders and move them between lists so that you can really quickly organize all of your tasks.

Because all of this is built on top of a common API, it works seamlessly together as well.

You can bring out Calendar, and begin dragging one of those events, and drag it over into Reminders, where you can drop it to quickly add it to your to-do list.

Now, while a drag is in progress, the system handles that touch, leaving your app fully interactive and free to respond to any other touches.

This makes it really easy to support even complex interactions that require navigation within your app, such as moving photos between albums.

Now, on the iPad, the entire OS remains fully interactive, allowing you to pick up multiple items in one place and then navigate freely around the system, even switching apps, bringing those dragged items with you so that you can use them somewhere else.

During all this navigation, the data behind the drag is being protected from unintended access by any other apps that you’re dragging it over.

Until a drag is completed, only metadata is made available to potential drop destinations.

When a drop does occur, the data is quickly made available to the receiving app.

Now, the security provided by this lazy delivery doesn’t have to come at the cost of performance either because even really large files can be transferred almost instantly with APFS cloning.

Now, this is all just the beginning.

To show you some other exciting use cases and to give you an idea of how easy it can be to add drag and drop to your apps as well, Eliza Block will now come up and give us a demo.


[ Applause ]

Thanks, Josh.

So I’d like to begin by showing an improvement that we’ve made to the iOS Home screen by leveraging the drag-and-drop API.

So here on this iPad, I can press and hold to begin reordering my apps.

For example, I can drag Reminders over Calendar to create a Productivity folder.

Now, in iOS 11, we have completely rewritten app reordering using drag and drop, and we’ve made it a whole lot more powerful.

Let’s find some more productivity apps to add to this folder.

Here’s Pages.

I’ll begin dragging it, and as I do, the Home screen remains fully interactive, so with my other hand, I can swipe between pages, navigate in and out of folders, and the best part is I’m no longer limited to dragging a single app at a time.

I can tap Keynote to add it to my ongoing drag, close the folder, and let’s look around and see if we can find Numbers.

[ Applause ]

There it is, so I’ll tap it, grab it, and now I can drag all three of these apps back over to the first page and drop them into my folder in one fell swoop.

[ Applause ]

So that’s app reordering with drag and drop, and you can do this kind of thing in your own applications too.

To see an example, let me launch this app called Byte Swap that my team has written to help me view and organize my Swift Playgrounds trading cards.

It’s composed of a split-view controller, and on the right here, you can see that my cards are displayed in a collection view using a basic flow layout.

Now, I think it would be cool if I could pick up these cards and drag them around to reorder them, move them between my different albums, and so let’s actually go ahead and implement that now.

I’ll switch over to Xcode, where I have the Cards View controller open.

Now, because Table View and Collection View already contain built-in support for drag and drop, adding drag and drop if you’re using one of these classes is really simple.

I’ll start by just declaring conformance to the new UICollectionViewDragDelegate protocol.

When I do that, Xcode helpfully informs me that I have not yet implemented one of the required methods in that protocol, and I can tap the issue and accept the Fix-it to add the protocol stub.

So this method, Items for Beginning Session, is prompting me to package up the data for the item being dragged, and I’m going to do that by calling out to a helper method that I wrote before, which just packages up the data as an image and makes it draggable.

That’s actually all I have to do to get the bare bones of drag working in this application, but I also want to make it possible to tap to add items to an ongoing drag.

So I’m going to add a second method to do that, and then we’ll go ahead and run this.
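The two drag delegate methods from this demo can be sketched like so (CardsViewController, the cards model, and the image-wrapping helper are stand-ins for the demo's actual code):

```swift
import UIKit

extension CardsViewController: UICollectionViewDragDelegate {
    // Required: package up the data for the item being dragged.
    func collectionView(_ collectionView: UICollectionView,
                        itemsForBeginning session: UIDragSession,
                        at indexPath: IndexPath) -> [UIDragItem] {
        return [dragItem(forCardAt: indexPath)]
    }

    // Optional second method: lets a tap add another item to a drag
    // that is already in flight.
    func collectionView(_ collectionView: UICollectionView,
                        itemsForAddingTo session: UIDragSession,
                        at indexPath: IndexPath,
                        point: CGPoint) -> [UIDragItem] {
        return [dragItem(forCardAt: indexPath)]
    }

    // Hypothetical helper: wrap the card's image in an item provider
    // so the system can deliver it lazily when a drop occurs.
    private func dragItem(forCardAt indexPath: IndexPath) -> UIDragItem {
        let image = cards[indexPath.item].image
        return UIDragItem(itemProvider: NSItemProvider(object: image))
    }
}
```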

I’ll switch over to my iPad.

Whoops. And here it is.

As you can see, now I can pick this guy up and drag him around.

But I can’t reorder yet because I haven’t implemented drop in this application.

However, I can still freely interact with the app, and I can pick up a second item to add it to my existing drag. This is actually more useful than it looks because, in addition to being able to drag these around, I can interact with other apps which may have implemented drop already.

For example, Mail.

So I can go into Mail, tap to compose a message, and then drop these two cards in the message just like that.

So that’s with only two delegate methods added to my existing application.

All right, switching back over to Xcode, let’s go ahead and implement drop.

So I will start once again by declaring conformance to a new delegate protocol, UICollectionViewDropDelegate.

And Xcode will once again offer to fill in the missing protocol stubs.

So this perform drop method is a little bit more involved.

I have to update my model by receiving the new drop data, and I’m also customizing the animation here a little bit with a helper method.

And there’s one other method I need to add as well, which is called as I drag my finger over one of the drop targets; that’s what causes the collection view to open up a gap so that I can reorganize within a particular list.
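Those drop delegate methods can be sketched as follows (again, the model-update details are hypothetical):

```swift
import UIKit

extension CardsViewController: UICollectionViewDropDelegate {
    // Required: receive the dropped data and update the model.
    func collectionView(_ collectionView: UICollectionView,
                        performDropWith coordinator: UICollectionViewDropCoordinator) {
        for item in coordinator.items {
            item.dragItem.itemProvider.loadObject(ofClass: UIImage.self) { image, _ in
                // Insert the dropped card into the model and the
                // collection view here (on the main queue).
            }
        }
    }

    // Called as the finger moves over the collection view; the
    // .insertAtDestinationIndexPath intent is what opens up a gap
    // so items can be reordered within a list.
    func collectionView(_ collectionView: UICollectionView,
                        dropSessionDidUpdate session: UIDropSession,
                        withDestinationIndexPath destinationIndexPath: IndexPath?)
                        -> UICollectionViewDropProposal {
        return UICollectionViewDropProposal(operation: .move,
                                            intent: .insertAtDestinationIndexPath)
    }
}
```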

So switching over to my iPad again, this time when I pick up a card, the other ones move out of the way, and I can still switch between lists, grab a couple more, and then I can drop them all together in my Favorites album.

So easy. So that’s adding drag and drop to your applications.

And switching gears, we’ve been working with a number of partners who have been working on getting drag and drop in their applications.

And one partner that has really stood out for their creative use of this API is Adobe.

So now, I would like to invite up Brooke Francesi from Adobe to show you what they’ve built.

[ Applause ]

Thanks, Eliza.

I cannot wait to show you what drag and drop can mean for the creative professional.

This is Adobe Photoshop Sketch, and it’s a focused painting and drawing app with some of the best natural media brushes, including all of my Photoshop brushes.

I’ve been working on this illustration, and before I finish it up, I want to add a few elements to the background.

And in order to do that, I’m going to open another Adobe app called Capture.

Capture is an app that allows me to capture inspiration from the world around me to create things like patterns, color themes, and even custom brushes, which I’ve been using to work on this project.

Now, in order to add some elements to the background, I need to load the color theme into my toolbar in Photoshop Sketch.

In the past, I would’ve had to go find all those colors, manually load every single one of them into my toolbar, go back, find the brushes.

You get it.

It’s a pretty long and arduous process.

But with drag and drop, I can select all five colors at once, navigate to my Brushes tab, select the two brushes that I’ve been using on this project, and simultaneously drag them over into my toolbar.

Pretty awesome, right?

[ Applause ]

Something that would’ve taken me a ton of steps before took me just seconds.

We even took advantage of the API to create our own custom animations.

Once we got our hands on the new drag and drop, we were amazed at how simple and straightforward it was to implement.

Within just two days, we had one engineer who had the basic API feature working in one app, and so we just decided to keep going.

The possibilities with drag and drop are literally endless.

The cool thing is, I’m not limited to just images, and brushes, and color themes.

I can actually drag and drop layers between multiple apps.

Now, I want to see what this is going to look like in context.

Actually, let me draw a little bit on here.

I want to see what this is going to look like in context, and so I’m going to take these layers that I’ve been working on by selecting them, I’m going to open up my Dock, and I’m going to drag them into another Adobe app, Photoshop Mix, which is an image compositing tool.

Now, what’s really cool is once I’m in here, I have options.

I can target any specific coordinates on my canvas if I know exactly where I want those layers to be, or I can just drag them right into my layer stack.

That’s pretty awesome, right?

I never get tired of seeing that happen.

[ Applause ]

Now, in case you’ve all been asleep for the past two minutes, I just dragged three completely different types of assets between three completely different applications.

And that’s why drag and drop isn’t just saving steps in my process.

It’s an entirely new mobile work flow for creative professionals on the iPad.

Thank you.

[ Applause ]

Thanks, Brooke.

Drag and drop is a really exciting, new API, and we can’t wait to see what you’re all going to do with it, but there are a number of other enhancements in iOS 11 that you should be aware of as well.

So let’s start with some updates to the user interface.

The top level of most applications now features a large title displayed prominently in the navigation bar, along with a new integrated search field design.

You’ll find this at the top of most applications, including Messages, Mail, the new App Store, Photos, and many more.

And enabling this appearance in your own apps couldn’t be any simpler.

With just a few properties, you can adopt and control the appearance of the new large title and adopt the new integrated search field.
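Those properties amount to a few lines in a view controller’s viewDidLoad (a minimal sketch):

```swift
import UIKit

class InboxViewController: UITableViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Adopt the new large title in the navigation bar.
        navigationController?.navigationBar.prefersLargeTitles = true
        navigationItem.largeTitleDisplayMode = .automatic

        // Adopt the new integrated search field.
        navigationItem.searchController = UISearchController(searchResultsController: nil)
        navigationItem.hidesSearchBarWhenScrolling = true
    }
}
```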

You’ll also find that in iOS 11, UITableView has now enabled self-sizing cells by default, helping to make it even easier for you to have great support for dynamic type.

[ Applause ]

This has also helped to make iOS 11’s UI more responsive to text size changes than ever before.

And with a wide range of dynamic type sizes available, your users are going to choose the size that’s most comfortable for them.

Dynamic type is a really commonly used user preference, so you definitely want to make sure that you test with it and respond to it well.

And don’t forget to design for everyone.

Test with a larger accessibility size enabled as well, because in these instances, your table view rows will get even taller, so you’ll want to make sure that your content lays out well and remains legible, even at those really large sizes.

Next, let’s talk about files.

File management has been completely revamped in iOS 11.

Now, of course, you can access files from the new Files app that you heard about this morning, but, even better, a new document browser with all of the same capabilities can be brought up and displayed right within your own apps.

You can filter the types of files that will be displayed within the browser, add custom actions right into the navigation bar, and even allow the creation of new documents right from within the same interface.

And of course, you can also customize its appearance so that it best fits in with the rest of your applications.

The document browser also offers really quick access to iCloud Drive’s new document sharing support, enabling easy collaboration across iOS and macOS.

Using NSFileCoordinator, you can easily find out about changes to these documents as they occur, even if the changes are made by another user.
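One way to hear about those changes is to pair NSFileCoordinator with an NSFilePresenter (a minimal sketch; the document URL is whatever shared document your app has open):

```swift
import Foundation

final class DocumentObserver: NSObject, NSFilePresenter {
    let presentedItemURL: URL?
    let presentedItemOperationQueue = OperationQueue.main

    init(url: URL) {
        presentedItemURL = url
        super.init()
        // Register so the system can deliver change notifications.
        NSFileCoordinator.addFilePresenter(self)
    }

    // Called when the file changes underneath us, including changes
    // made by another collaborator and synced down through iCloud.
    func presentedItemDidChange() {
        print("Document changed; re-read it with coordinated reading.")
    }

    deinit {
        NSFileCoordinator.removeFilePresenter(self)
    }
}
```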

And on macOS, NSDocument includes built-in support for file sharing right within your apps.

Finally, let’s talk about iOS 11’s new multitasking features. Enabling your apps to participate in the new multitasking work flows enabled by the new floating Dock, slide-over apps, pinned apps, and the brand-new App Switcher is really straightforward.

If you’ve adopted size classes, flexible layouts, and default storyboards over the last few years, then you’re actually already done.

iOS 11 builds on top of these existing technologies to enable a whole bunch of new work flows.

Now, if you haven’t adopted them yet, there’s never been a better time.

That’s all just the start.

There’s a ton of other new and enhanced APIs available in iOS 11.

For example, password autofill makes it really quick and secure to get login credentials out of iCloud Keychain and right into your apps, which can really streamline the login flow for your applications.

It’s really great [applause].

Named color assets and vector assets can now be stored in your asset catalogs, and UIFontMetrics makes it really easy for even your own custom fonts to participate in dynamic type.

[ Applause ]

In addition to all of this, there’s also some really great enhancements to APIs for various Apple services, so let’s talk about just a few, starting with iMessage.

With the new app strip in iOS 11, Messages apps are now more discoverable and they’re a lot more powerful as well.

Your apps can now render live content in the message bubbles right in line in the conversation.

And with the new direct send API, you can remove a common point of friction for your users, making it possible to really easily send messages with just a single tap from right within your app extensions.

Next, SiriKit.

The list of supported domains has expanded to include a number of new domains, including payment accounts, lists, notes, QR code display, and more.

And if you missed it, earlier this year, we greatly simplified SiriKit development with the addition of simulator support for both iOS and watchOS.

And finally, Apple Music.

With MusicKit, adding music playback to your apps has never been easier.

Your users can now play any of the 40 million songs available to Apple Music subscribers right from within your apps.

This is especially great for fitness apps, now enabling full access to songs, playlists, and radio stations without ever interrupting your workout.
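On iOS, one way to queue up a catalog song is through the MediaPlayer framework using an Apple Music store ID (a sketch; the ID is a placeholder, and playback requires an Apple Music subscription):

```swift
import MediaPlayer

// The system music player keeps playing even if the app exits,
// which is usually what you want for a workout soundtrack.
let player = MPMusicPlayerController.systemMusicPlayer
player.setQueue(with: ["1234567890"])  // Apple Music catalog store IDs
player.play()
```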

Now, these are just some of the enhancements to the APIs for Apple’s various services.

Next, to tell us about some of our exciting new photo and imaging APIs, Sebastien Marineau-Mes will come up to the stage.


[ Applause ]

Thank you, Josh.

Thank you.

Really excited to be here.

I’ve got a lot to cover today, so let me get started.

First, let’s talk about our new Photos project extension on macOS.

You’re now able to bring creative, project-based features directly to Photos.

Users can easily discover your new extension through the Mac App Store.

And once you’ve got that wired up, users can select photos, memories, or albums and then choose your extension from the Create menu.

The Photos app gives your extension access to not only images, but also the rich context, so that project creation feels completely natural.

Additionally, we allow you to persist your own project data.

Now, your extension runs within the Photos application, so it really feels like a seamless experience.

We, of course, designed it to support print products.

Here’s a great example of this.

But you could do a lot more with this.

Here’s an example of someone who’s built a web publishing application, and you could even support ordering framed prints of a favorite photo directly.

So we think our users are really going to love the new extensions that you’re going to create by extending Photos.

Next, let’s talk about Camera.

In iOS 11, the camera has a lightweight QR code detector, and all you need to do is point your camera at a code.

The QR code is automatically decoded and will offer to open a deep link into the appropriate app.

There you go.

[ Applause ]

Now, it’s really simple to use as a developer.

You simply need to adopt Universal Links, and we even support common QR formats, such as location, contacts, and Wi-Fi networks.

Next, let’s talk about compression and formats.

We’re making really big advances here this year.

Now, of course, JPEG is ubiquitous and has served us really well over the last 25 years, but today we know that a number of technologies offer better compression and flexibility.

And similarly for H.264, which is over a decade old.

Now, compression’s important, but we must consider other requirements, so let’s look at some of these.

So for example, today the line between photos and videos is blurred, and a lot of what we capture is actually a combination of both of these types of assets.

We also have new sensors that capture much richer images, and displays that bring them to life.

And finally, whatever we choose has to keep pace with ever-increasing resolutions.

And so for the codec, we’ve selected HEVC because it meets all of these requirements.

It offers up to 2x compression at equivalent image quality, and when you add up the billions of photos and videos, or trillions I should say, that adds up to a lot of saved space.

HEVC is also hardware accelerated across many of our devices.

And finally, it supports photos, videos, 4K and beyond, and a number of new technologies for capture and display.

We’ve also selected a new image container called HEIF, or, as I like to call it, “heef.”

You can repeat after me, “heef.”

HEIF supports the concept of compound assets, so in a single file, you can have one or more images.

You can have videos.

You can have auxiliary data such as alpha and Depth.

It’s also highly extensible.

It supports rich metadata, animations and sequences, and other media types, such as audio.

And finally, HEIF is an ISO standard, which is critical for ecosystem adoption.

Now, our standard APIs provide direct support for HEVC and HEIF.

And generally, it’s transparent.

No opt in is required.

But there may be cases where you want to explicitly control the format.

So for example, if you’re trying to play a high-resolution movie on an older device that doesn’t have hardware acceleration, we provide an API that you can use to determine whether a given file, in this case an HEVC movie, would play well on this device.

And if it doesn’t, we can automatically fall back to one of the older formats to maintain a great user experience.
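A minimal version of that check using AVFoundation might look like this (the two movie URLs are placeholders):

```swift
import AVFoundation

// Ask whether this device can play the HEVC version of the movie;
// if not, fall back to an H.264 encode of the same content.
let hevcAsset = AVURLAsset(url: hevcMovieURL)
let assetToPlay = hevcAsset.isPlayable ? hevcAsset
                                       : AVURLAsset(url: h264MovieURL)

let item = AVPlayerItem(asset: assetToPlay)
let player = AVPlayer(playerItem: item)
player.play()
```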

So now, within our ecosystem, we automatically use HEVC and HEIF as we share between all compatible devices.

But when we’re unsure that the recipient can decode these new formats, we’ll err on the side of compatibility and automatically transcode to JPEG and H.264.

An example of this would be putting photos in an email attachment.

I’d really encourage you as you adopt these new formats to be thoughtful about compatibility as you build this into your apps.

So that concludes compression, and let me now talk about another new topic, Depth.

Now, we’ve all seen Portrait Mode on the iPhone 7 Plus, which generates the Depth effect using the two cameras.

Here’s how we do it.

We capture simultaneous images from both cameras and we then use stereoscopy to compute Depth.

This is the same as human vision.

We store this in what we call the depth map.

So in Portrait Mode, we take the photo and the depth map, and then we use the map to blur the objects that are in the background.

But what’s really cool is that in iOS 11, we’re now storing the depth map as part of what we capture, and we’re giving you and your app access to the photo and the depth map.

So you can load it up and then you can use this to do your own creative effects.
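Getting at the depth map from a capture can be sketched like this (inside an AVCapturePhotoCaptureDelegate, assuming depth delivery was enabled on the photo output and requested in the capture settings):

```swift
import AVFoundation

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard let depthData = photo.depthData else { return }

    // Normalize to 32-bit disparity so the values are easy to work with.
    let disparity = depthData.converting(
        toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let depthMap: CVPixelBuffer = disparity.depthDataMap

    // Feed depthMap into your own effect, e.g. masking the background.
    _ = depthMap
}
```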

And let me show you a couple of examples of this.

So I might take this photo and I might decide to turn the background into monochrome, really highlighting the subject.

I might apply a more artistic effect.

Maybe I’ll replace the background with this pumpkin patch, with a blur effect.

And you can imagine even more creative effects.

In fact, these are just examples.

The possibilities here are endless.

Now, this is Depth for still images.

We’re also giving you access to a lower-fidelity, real-time Depth stream using the Camera APIs.

And this lets you build Camera-based applications that will use Depth in a novel way.

So Depth is really a simple extension to both our still image and Camera capture APIs, and we look forward to everything that you’re going to be building on top of these new APIs.

Really, really exciting.

Next yeah.

There you go.

[ Applause ]

Next, let’s talk about our new Vision APIs.

Now, of course, computer vision and machine learning already underpin many of the features in Apple’s products.

You saw that this morning.

You’ve also seen apps in the Store that use computer vision.

One of them is the hot dog detection app that was featured in HBO’s Silicon Valley.

But of course, these apps have all had to roll the computer vision technology on their own.

And what we’re doing this year is taking all of our great built-in capabilities and making them available to all of you.

So, among the things you can do: face and landmark detection.

We’ve seen the rectangle detection.

Text. Here’s another one with barcode detection.

And object tracking.

And many, many more things.

And one of those is actually the integration between Vision and Core ML.

And that allows you to bring your own machine learning and computer vision models and run them as part of Vision’s image processing pipeline, so that you can extend the Vision framework in very novel ways.

And to show you an example of this, I’m going to invite Emily Kim up on stage to show us a great demo app.


[ Applause ]

Hi. I’d like to show all of you a demo app that we’ve written to showcase three of the new technologies in iOS 11: Core ML, Vision, and Depth.

Now, I’ve asked Seb to help me out a little bit in this demo, so it looks like he’s just about ready.

All right.

So first, we’re going to launch the application.

You can see here that the app has recognized that Seb is holding a piano keyboard, and we’ve placed a speech bubble next to his mouth with a little piano emoji inside.

Now, let’s take a quick photo.

And, well, I guess that’s a picture of Seb holding a keyboard.

I hate to break it to you, Seb, but you look a little boring.

Can we maybe try something else?

All right.

I mean that in the kindest way possible.

All right.

Let’s see if our app can recognize something that’s not piano.

So we’ll go back.

All right.

We can see now that Seb’s holding a guitar.

The app knows this and has placed a little guitar emoji in the speech bubble next to his mouth.

So we’ll take another photo.

Whoa, Seb, you look way cooler than you did before.

You’ve got some [ Laughter ]

[ Applause ]

Yeah, if you’ll notice, he’s got some cool rock star sunglasses and some cheering fans behind him, so before we jump over to Xcode to take a look at just how easy it was to write an application that can make even Seb look cool, let’s please give a round of applause for Seb for being a good sport.

[ Applause ]

All right.

So this was a pretty standard UIKit and AVFoundation application, so I’m not going to go over the details of that.

Instead, I’d like to focus on the three new areas that you saw: object classification, face landmark detection, and Depth.

So first, let’s take a look at object classification.

Using the power of Core ML, you can now integrate trained machine learning models into your app.

Now, many of you will probably be rolling your own.

However, we wanted to show you just how easy this was, so we simply took an off-the-shelf model and dropped it into our project.

Super easy.

You can see here that all we had to do was load this model file in Core ML and then pass that along to Vision so that we could run this against images.

Now, as the frames come in from the Camera preview, we simply create a Core ML request to run on each of these frames.

Once we recognize an object (for example, the guitar), the results come back from this request indicating that it’s found a guitar in the scene, and then we can place that little guitar emoji inside the speech bubble.
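The per-frame flow Emily describes can be sketched roughly like this in Swift. The model class name (Inceptionv3) stands in for whatever Xcode-generated class your dropped-in model produces, and the completion handling is illustrative, not the demo’s actual code:

```swift
import Vision
import CoreML

// Load the Core ML model once and wrap it for Vision.
// (Inceptionv3 is a stand-in for your Xcode-generated model class.)
func makeClassificationRequest() throws -> VNCoreMLRequest {
    let model = try VNCoreMLModel(for: Inceptionv3().model)
    return VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first
            else { return }
        // e.g. "guitar" with its confidence; drive the emoji choice from this.
        print("Saw a \(best.identifier) (confidence \(best.confidence))")
    }
}

// Run the request against each frame coming in from the camera preview.
func classify(pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```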

Now, if you also recall from the demo, the speech bubble was placed right next to Seb’s mouth.

So how exactly did we know where to put that?

Let’s take a look at the face landmark detector.

Using Vision, you can now find face landmarks, which include each of your eyes, your nose, and what we’re interested in: the mouth.

So again, we simply create a face landmarks request and run that on the frames that are coming in from the camera.

When the request comes back and says that it’s found landmarks on a face, we simply ask for where the mouth was and then we can anchor the speech bubble appropriately so that it looks like it’s coming out of Seb’s mouth.
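A minimal sketch of that landmark step, assuming frames arrive as pixel buffers; the averaging into a single anchor point is an illustrative choice, not necessarily what the demo did:

```swift
import Vision
import CoreGraphics

// Find the mouth region so a speech bubble can be anchored next to it.
// Landmark points come back normalized to the face bounding box.
let landmarksRequest = VNDetectFaceLandmarksRequest { request, _ in
    guard let face = (request.results as? [VNFaceObservation])?.first,
          let mouth = face.landmarks?.outerLips else { return }
    let points = mouth.normalizedPoints
    guard !points.isEmpty else { return }
    // Average the mouth's points to get a single anchor for the bubble.
    let sum = points.reduce(CGPoint.zero) {
        CGPoint(x: $0.x + $1.x, y: $0.y + $1.y)
    }
    let anchor = CGPoint(x: sum.x / CGFloat(points.count),
                         y: sum.y / CGFloat(points.count))
    print("Anchor speech bubble near \(anchor)")
}

// Run it on each incoming camera frame.
func detectLandmarks(in pixelBuffer: CVPixelBuffer) {
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        .perform([landmarksRequest])
}
```

The eyes work the same way: `face.landmarks?.leftEye` and `rightEye` give the regions used to place the sunglasses.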

We also used face landmarks to find each of Seb’s eyeballs to figure out where to put the sunglasses on his face in that cool guitar photo.

Now, also in that photo, if you remember, we replaced the background and put it in front of his adoring fans.

So we used Depth to do that, and let’s take a closer look.

It’s really easy to get Depth data now when you capture images with the camera. All you have to do once you’ve captured the image is extract that Depth information, and then we wrote a custom kernel that simply applies that Depth information to the captured image in order to teleport Seb into his rock star universe.
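The capture-side extraction can be sketched like this, inside an AVCapturePhotoCaptureDelegate and assuming the photo output was configured with depth delivery enabled on a depth-capable device; the compositing kernel itself is not shown:

```swift
import AVFoundation

// Delegate callback after a capture with depth delivery enabled.
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard let depthData = photo.depthData else { return }
    // Convert to 32-bit disparity, a convenient format for building
    // a foreground/background mask.
    let disparity = depthData.converting(
        toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let depthMap: CVPixelBuffer = disparity.depthDataMap
    // Hand depthMap plus the captured image to the custom kernel that
    // composites the new background behind the subject.
    _ = depthMap
}
```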

So I hope you’ve seen just how easy it is to incorporate Core ML, Vision, and Depth into your applications.

Back to you, Seb.

[ Applause ]

I have to say that rock star universe was pretty exciting for the couple minutes I got to live in it.

So thank you, Emily.

That was really great.

Now, let’s look at Core ML more closely.

Core ML is really about helping you create new experiences in your app by making it easy for you to incorporate machine learning models.

And so your application can make use of domain specific frameworks.

We’ve seen Vision.

We also have natural language processing.

Or directly use the machine learning framework.

Now, all of this is powered by Accelerate and Metal Shaders to really give you the best possible performance.

Core ML itself provides a rich set of primitives to integrate state-of-the-art machine learning models into your apps.

We’ve seen some of those this morning.

So in addition to supporting extensive deep learning models with over 30 layer types, it supports standard machine learning models, like tree ensembles, SVMs, and generalized linear models.

Really, the point of this is to allow you to target a broad variety of applications.

We’re talking about image classification, music tagging, handwriting recognition, and much, much more.

Many other domains are available to you.

Now, how do you get the models?

Well, we give you access to this great machine learning community with tools such as Turi, Caffe, or Keras, which is a popular library for TensorFlow.

Now, what does your work flow look like as a developer?

Well, let’s say that you train your own models using Caffe, or perhaps, as Emily did, you pick a pre-trained model that’s available on the Internet.

You then take that model and run it through the Converter tool, and that Converter tool will produce a Core ML compatible format.

You take that, you drag it into Xcode, and then you simply start using it in your application.

Now, we’re also open sourcing the Core ML Converter tool so that we can get even broader community adoption over time.

We’re also focused on performance so that it runs great on device.

Examples of what we do: graph optimizations and fusing multiple operations together automatically.

And we also automatically analyze your neural network to make best use of hardware acceleration.

So when you bring all of this together, Core ML really allows you to easily leverage the power of machine learning to create truly intelligent applications.

It’s simple to use, it delivers great performance and efficiency across all of our platforms, and it really lets you target your next-generation intelligent app at hundreds of millions of devices that are Core ML enabled.

So we really can’t wait to see what you’re going to create with this, along with many of the other APIs that we’ve talked about today.

So with that, I’d like to call Jeremy Sandmel up on stage to talk about Metal and graphics.


Thanks, Seb.

[ Applause ]

So we developed Metal to be the fastest and most efficient way to drive the incredibly powerful GPUs in our iOS, macOS, and tvOS products.

And since its introduction, we’ve dramatically expanded Metal to support the advanced rendering, graphics, and compute features most requested by our developers.

And the results have been amazing.

There are more than 143,000 Metal apps in the iOS App Store created by our developers who are directly calling the Metal API.

But we’re also building our own system frameworks on top of Metal.

You’ve heard about a number of them today.

For instance, if you’re drawing your user interface with UIKit or your maps with MapKit, you’re also using Metal.

In fact, over 1.7 million apps are in the App Store using Metal through Apple system frameworks and benefitting from Metal’s performance and power efficiencies automatically.

Developers are using Metal in truly awe-inspiring ways, such as the professional image and photo editing in the new Affinity Photo for iPad from Serif Labs.

And 3D modeling and rendering in MAXON’s upcoming Cinema 4D with AMD’s Metal accelerated ProRender technology.

The gorgeous sci-fi adventure game Obduction from Cyan, the legendary creators of the game Myst.

And the truly stunning F1 2016 racing game from Feral Interactive.

In fact, Ian Bullock of Feral has said that, “Metal’s richer feature set and lower overhead have allowed them to bring cutting edge games to the Mac with frame rates and effects that simply weren’t possible before.”

Metal is broadly supported across our iOS devices since 2013 and our Macs since 2012.

And this means there are more than 900 million products supporting Metal across macOS, iOS, and tvOS.

This is truly stunning, and that’s why we’re so incredibly excited today to announce the next generation of Metal, Metal 2.

Now, Metal 2 is made up of six key advancements that we’re going to describe to you today.

The first is GPU-driven rendering.

This means that we’re continuing to reduce the amount of work required by the CPU in order to execute your GPU commands.

Now, you may recall that the overhead of OpenGL could often steal a major portion of your app’s per-frame rendering time, significantly limiting your application’s performance.

And Metal dramatically reduced that CPU time, giving it back to your application.

Well, with Metal 2, we’re going even further.

We’ve introduced a number of new features designed to off-load the CPU as much as possible and to allow the GPU to much more efficiently schedule its own work.

One of these features is called Metal argument buffers.

Now, as a little bit of background, Metal render passes are made of graphic state and references to resources, such as textures and buffers.

And these resources traditionally needed to be specified individually with each draw call.

With Metal 2, you can assign your resources to argument buffers once during your app’s initialization, and then you can rapidly switch between these argument buffers that you’ve already set up with each draw call, which can be dramatically more efficient.

In fact, the more resources you use, the more complex your rendering, the greater the savings can be.

And in this example using Metal argument buffers, we reduced the CPU time required for our draw calls by more than 10x.
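The set-up-once, bind-per-draw pattern just described can be sketched roughly like this; buffer index 0 and a textures-only layout are assumptions about the shader, not part of the talk:

```swift
import Metal

// Build an argument buffer once at init time, encoding texture
// references into it. (Assumes the fragment shader declares an
// argument buffer of textures at buffer index 0.)
func makeArgumentBuffer(device: MTLDevice,
                        fragmentFunction: MTLFunction,
                        textures: [MTLTexture]) -> MTLBuffer? {
    // The encoder matches the shader's declared argument buffer layout.
    let encoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 0)
    guard let buffer = device.makeBuffer(length: encoder.encodedLength,
                                         options: []) else { return nil }
    encoder.setArgumentBuffer(buffer, offset: 0)
    for (index, texture) in textures.enumerated() {
        encoder.setTexture(texture, index: index)
    }
    return buffer
}

// At draw time, one bind call replaces a setFragmentTexture call per
// resource. Resources referenced indirectly must be made resident:
//   renderEncoder.useResources(textures, usage: .read)
//   renderEncoder.setFragmentBuffer(argumentBuffer, offset: 0, index: 0)
```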

Next, Metal 2 has taken major steps forward towards a common feature set across our product families.

We’ve strongly emphasized compatibility across our platforms while still providing developers with access to hardware specific features required for optimal performance.

We’re bringing key features from macOS to iOS and vice versa.

One of these examples is called Metal Resource Heaps.

This can provide a much more efficient way of managing your Metal textures and buffers without requiring you to become an expert in how each individual GPU manages and allocates memory.

Traditionally, each Metal texture would require a separate, individual memory allocation from the OS.

Now, this was simple to use and simple to understand, but it could be quite expensive.

But with Metal Heaps, you can allocate a single memory buffer and then store multiple textures within it.

You can also do incredibly fast reallocations and you can reinterpret existing memory for a new texture.

Interestingly, Metal Heaps also allows you to reuse previously allocated memory from textures you don’t need simultaneously, which can save you significant amounts of memory per frame.
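A sketch of the heap-based allocation just described: one backing allocation, many textures suballocated from it. The sizing loop and descriptor list are illustrative; descriptors are assumed to use the heap’s (private) storage mode:

```swift
import Metal

// Allocate a single heap large enough for every texture, then
// suballocate each texture from it with no per-texture OS allocation.
func makeTextures(device: MTLDevice,
                  descriptors: [MTLTextureDescriptor]) -> [MTLTexture] {
    let heapDescriptor = MTLHeapDescriptor()
    // Size the heap to fit every texture, respecting each alignment.
    heapDescriptor.size = descriptors.reduce(0) { runningSize, descriptor in
        let sizeAndAlign = device.heapTextureSizeAndAlign(descriptor: descriptor)
        let aligned = (runningSize + sizeAndAlign.align - 1)
            / sizeAndAlign.align * sizeAndAlign.align
        return aligned + sizeAndAlign.size
    }
    guard let heap = device.makeHeap(descriptor: heapDescriptor) else { return [] }
    return descriptors.compactMap { heap.makeTexture(descriptor: $0) }
}
```

Because the heap owns the memory, freeing a texture and creating a new one from the same heap is what makes the fast reallocation and aliasing described above possible.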

This is all very simple and easy and doesn’t sacrifice any performance.

Now, Metal is not just for graphics.

The Metal Performance Shaders Library provides an optimized set of image processing, linear algebra, and other compute operations.

And Metal 2 extends this support for accelerating machine learning.

We’ve included a number of GPU-accelerated machine learning primitives and a new graph API for convolutional neural networks, and it can all be efficiently integrated into your Metal graphics and compute code.

Machine learning kernels such as LSTMs, convolutions, and neuron layers are included as well as an expanded set of matrix math operations.

And most importantly, MPS also enables everyone using the powerful, new Core ML framework that Seb just described to get the full performance of the GPU for machine learning automatically.

Next, we’re very excited to talk about VR in Metal 2.

With Metal 2 and macOS High Sierra, we’re enabling VR content development on the Mac for the first time with optimized support for 360-degree video editing and 3D content creation.

And we’ve added support for low-latency stereoscopic rendering on head-mounted displays and powerful, new developer tools to optimize your VR app’s performance.

We’re incredibly excited to say we’ve been working closely with Valve, and they’ve announced they’re bringing their full SteamVR runtime and SDK to the Mac with beta versions available today.


[ Applause ]

This also includes support for the HTC Vive headset and controllers.

And as you saw this morning, we’ve also been partnering with Epic Games, who’s announced macOS VR support in Unreal Engine in their upcoming release later this year with early access available from GitHub starting in September.

This includes support for Epic’s immersive VR Mode Editor that you saw this morning, allowing you to step inside the VR environments as you’re creating them.

We’re also very excited to be partnering with Unity, who has announced they too are adding macOS VR support to their incredibly powerful engine and editing environment behind many of the most popular games.

This has enabled the developer I-Illusions to use Unity to bring the addictive and exciting VR space game Space Pirate Trainer to the Mac in a matter of just a few days.

Now, a word about performance, which is incredibly important for building compelling VR apps.

Many developers are familiar with optimizing their applications to fit within 60 frames per second, and this would mean you would have a fairly generous 16.7 milliseconds per frame all devoted to your application’s rendering.

However, to create an immersive VR experience, you need to render a full stereoscopic pair of views at 90 frames a second, which can leave you only about 11 milliseconds per frame.

But actually, you don’t even get quite all of that to yourself, because there’s a VR compositor in the mix, which uses the GPU to combine the left- and right-eye images and to compensate for HMD lens distortion and for head movement.

And this leaves you only about ten milliseconds per frame.

In other words, you effectively need to target your VR app to achieve 100 frames per second.

Now, this can be very challenging, but Metal 2 provides some awesome, new tools for optimizing your VR app’s performance.

We’ve included built-in support for SteamVR’s compositor tracepoints within the Metal system trace.

This allows you to see exactly where the GPU time is going and precisely when your frames will hit the glass.

With Metal 2 and the powerful, new GPUs in the latest iMacs that we announced this morning and that are shipping today, we’ve fully enabled VR content development on your Mac desktop.

Now, many professional content creators love the mobility of working with our MacBook Pros, but intensive VR development can require powerful GPUs that don’t always fit in the thinnest and lightest laptops.

This is why Metal 2 is bringing first-class support for external GPUs to your MacBook Pro and to macOS High Sierra.

Yeah, thanks [applause].

So you can very easily add support to your app for external GPUs.

It takes just a few lines of code: you register for external GPU device connection callbacks and identify which GPUs in your system are removable.

This enables the best of both worlds: the GPU horsepower for building immersive VR applications on our most mobile Macs. And we’re really excited about it.
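The registration just described can be sketched like this on macOS High Sierra; the printed messages are illustrative:

```swift
import Metal

// Keep the observer alive for as long as you want notifications.
var deviceObserver: NSObject?

// Observe GPU hot-plug events and note which devices are removable
// (i.e. external).
func watchForExternalGPUs() {
    let (devices, observer) = MTLCopyAllDevicesWithObserver { device, name in
        switch name {
        case .wasAdded:
            print("GPU attached: \(device.name), removable: \(device.isRemovable)")
        case .wasRemoved:
            print("GPU detached: \(device.name)")
        default:
            break
        }
    }
    deviceObserver = observer
    for device in devices where device.isRemovable {
        print("External GPU already present: \(device.name)")
    }
}
```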

So to get you started, we’re offering an external GPU Developer Kit through our Developer program.

It includes everything you need for VR content development on your MacBook Pro, including a discount on the HTC Vive head-mounted display, and it’s all available today from

And last but not least, Metal 2 enables and provides our most advanced graphics developer optimization tools yet.

We’ve added the top-requested tools and features to enable rapid debugging, analysis, and optimization of your GPU performance bottlenecks.

And it’s all integrated directly into Xcode and Instruments.

We’ve included powerful, new GPU hardware performance and software performance counters, giving you deep insight into the GPU’s operation and automatic bottleneck detection that can take you exactly to the source of your performance problems.

It’s all very powerful.

And to show you a demo of Metal 2 Developer tools in action, I’d like to invite Eric Sunnel [phonetic] to the stage.



Thanks Jeremy.

Hi, everyone.

Today I’d like to show you a couple of the new features in Metal, like our GPU profile counter tool in Xcode.

So what I have here is a Metal application rendering thousands of asteroids on the screen.

And it’s all individual draw calls, and it’s important to note that each of these draw calls is using a number of unique Metal resources, all chosen at runtime.

So now what I’m going to do here is add a couple more asteroids to the scene.

All right, that’s not performing quite as well as I want, so let’s exit out of full screen, and take a GPU capture, and see if we can get a better idea of what’s going on.

So not only is it taking a GPU capture and analyzing all of our shaders, but now it’s also capturing all of the hardware performance counters, and on a per draw call basis.

We’ll be able to visualize that in a second.

All right, so now on the left-hand actually, let me change my system editor here.

And on the left-hand side, now I have access to a GPU report that now brings up in the standard editor a summary of all the performance counters on the timeline for all of the draw calls.

And on the right-hand side, I have all the detailed performance counters as well.

So if I select a particular draw call, let’s say this one for instance, all of the right-hand side updates with all the detailed performance counters that we’ve captured.

If I hover over the vertices row here, I can get a summary for that draw call: how many vertices were submitted to the scene?

Almost 8000.

Seems pretty high.

If I look at the pixels stored, it looks like I’m only storing about 11 pixels for that draw call, which seems a lot lower in comparison.

So now, if we look in the upper right-hand side, we can see some of the recommendations provided by Xcode.

The top one here, pixels per triangle ratio, is very low.

Seems about right.

Now, if we disclose this, we get some of the recommendations made by Xcode.

So we can see here it says, “Consider reducing the number of triangles and consider reducing the work in the vertex shader.”

Now, if we click on the top one and click on the link here, it’ll take us right to all the bound resources for the draw call.

We can see here that we’re actually loading our high poly model of the asteroid, which doesn’t make much sense for this scene.

So we’re going to go over here and make a source code change.

Get rid of the editor.

All right.

And let’s change this to the reduced model that I’ve prepared ahead of time.

All right.

Now, let’s make one more change in this project.

Let’s see if we can make use of argument buffers.

So what I have here is our for loop that’s doing the draw call for each asteroid.

And above each draw call, I’m setting a number of resources.

So with argument buffers, I can remove this code and, instead, insert a call to binding our argument buffer ahead of time and basically allow the GPU now to make the dynamic selection of which resources to use.

So with these two simple changes in place, let’s run and see where our performance is now.

All right.

OK, that’s looking good.

Let’s add some more.

OK, that’s looking a lot better.

That’s exactly what I like to see.

All right, so, yeah, with the new features in Metal 2, we hope you get deeper insight into where your application is spending time and remove even more code from the critical path.

Thanks very much.

Back to you, Jeremy.

[ Applause ]

Well, thanks, Eric.

So that’s Metal 2 GPU-driven rendering, a unified feature platform, machine learning acceleration, support for VR content development on your Mac with support for external GPUs, and our most advanced developer optimization tools to date.

We can’t wait to see what you’re going to build with Metal next.

Thank you very much.

And now, I’d like to invite to the stage Vice President of AR and VR Technologies, Mike Rockwell.

[ Applause ]

Thanks, Jeremy.

So I am incredibly excited to talk to you about ARKit, Apple’s new framework that enables the creation of augmented reality apps on iOS.

One of our primary goals in creating ARKit was to make sure we could support a broad range of devices from day one.

We didn’t want to require specialized hardware, and I’m happy to say that we achieved that.

ARKit runs on iPhone 6s and later, and on iPad Pro.

That means it will run on hundreds of millions of devices from day one.

It makes iOS the largest AR platform in the world.

So what do you need to do to create great AR?

The first thing you need to know is where the camera is in space and where it’s looking.

We use a technique called visual-inertial odometry to do that.

Let’s see how that works.

So we’ve got our scene and we’ve got our camera.

And the camera’s looking at the world.

We identify feature points in the world and we track them from frame to frame.

From that, we are able to back-calculate the location of the camera, and create a coordinate system, and then give you that location in real time at 60 frames per second.

Now, if we were going to do that calculation at 60 frames per second, it would take a lot of CPU horsepower.

What we do instead is we fuse the data from the accelerometer and the gyro with that tracking information and we’re able to reduce the compute load dramatically.

In fact, to get this information only takes a fraction of a single CPU.

That leaves the GPU available to you to do fantastic rendering of 3D objects.

The second thing you need is to be able to understand the scene.

So, what’s in the scene, so that you can put things into it and have them integrate and feel natural.

We use those feature points that we tracked earlier to identify major planes in the scene.

We look for ones that are coplanar, and then we find the extents of those planes across the scene.

That allows you to integrate objects and have them be completely natural.

So I can, for example, set that vase on the table, and it feels like it’s in the scene itself.

Now, if you didn’t know what the size of the table was, your vase might come out and be gigantic.

So in addition to identifying those planes, we make sure that the coordinate system is accurate.

In fact, it’s accurate to within 5 percent, so your objects feel natural in the scene.

The final thing you need in order to enable those objects to feel right in the scene is to have the lighting be accurate.

So we provide accurate light estimations so that as the scene would darken, you can darken your objects as well.

If you don’t have that, then objects appear to glow in the scene and they won’t appear natural.

So let’s look at how you can access this incredibly powerful framework.

It’s really very easy.

All of this is available through ARKit.

All you have to do is create a session configuration, indicating things like whether you want plane estimation and whether you want a light probe, and then you start an AR session.

Once that’s started, your frames will either be delivered to you through the AR session delegate, or you can access them as a property on the AR session.

Each one of those frames contains things like a time stamp, the captured image, the camera direction and position, the major planes in the scene, and the light probe.
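The setup just described can be sketched like this, using the shipping class names; the particular options chosen here (horizontal plane detection, light estimation) are illustrative:

```swift
import ARKit

// Minimal session setup: opt into plane detection and light
// estimation, then receive frames through the session delegate.
final class ARController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal      // find major planes
        configuration.isLightEstimationEnabled = true   // provide a light probe
        session.delegate = self
        session.run(configuration)
    }

    // Frames arrive at 60 fps, each carrying a timestamp, the captured
    // image, the camera transform, anchors, and the light estimate.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let cameraPosition = frame.camera.transform.columns.3
        if let light = frame.lightEstimate {
            // Darken virtual content to match light.ambientIntensity.
            _ = light.ambientIntensity
        }
        _ = cameraPosition
    }
}
```

Alternatively, you can poll `session.currentFrame` instead of implementing the delegate.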

So now that you’ve got your scene set up, you’re ready to render.

And ARKit supports a broad range of options for rendering.

You can use Metal 2, as Jeremy just talked about, that incredibly powerful low-level engine for accessing the GPU, but you might want to use a higher-level framework because that’s the abstraction you’d like.

So we’ve optimized both SceneKit and SpriteKit to work with ARKit, and it’s very efficient.

In addition, we’ve been spending the last few months partnering with the large 3D engine providers to make sure that their engines were optimized for ARKit.

And I’m really happy to announce that Unity is releasing an open source plug-in this week that works with ARKit, so it’s incredibly easy to use.

In addition, Epic is releasing ARKit support in Unreal Engine as a developer preview today, and they’ll be shipping it later on this year.

So that’s a brief first look at ARKit.

I’d like to invite Martin Sanders from LEGO to the stage to give us a little demo of some of the fun things they’ve been working on.

Thank you.

[ Applause ]

Hi, everybody.

Great to be here.

At LEGO, we’ve been using ARKit to develop some fun, exciting experiences that can bring your LEGO creations to life in real-world environments.

So it all begins with the automatic plane detection.

So as that helicopter moves around, it gives us that surface, and we know we have that surface to start building.

So let’s go ahead and add some sets from our recent collection for The LEGO Batman Movie.

So here we have Arkham Asylum, and with those realistic shadows and that dynamic lighting, it really adds that sense of realism.

And on top of that, when we put in things like animations and effects, it really adds that sense of magic and life to them.

And with size estimation, all of the sets that we bring in come out one-to-one scale, so we know that they’re actually matching LEGO sets.

So this is looking like a cool Gotham City scene, but it definitely feels like something’s missing, doesn’t it?

So let’s go ahead and add the one and only LEGO Batmobile.

And the man himself, the Caped Crusader, LEGO Batman, of course.

Now, it’s a really cool model to check out.

There’s no doubt about it.

But we can go a little further here.

We can start investigating how models are actually constructed and made by exploring it in this view.

Now, Batman doesn’t look too happy about this, so let’s put that back together for now, I think.

Now, we also have the power to scale up our models.

Now, imagine what it’s like when you can go inside your LEGO creations and explore and view them from new perspectives.

And with the Batmobile at this scale, who wouldn’t want to go inside and check out the cockpit and all the cool gadgets from there?

[applause] And maybe even involve some friends in there as well.

I don’t know.

Get some selfies.

It’s fun stuff.

So as our LEGO sets start to come to life and animate, we can even then take on the role of being the moviemaker.

So imagine now capturing all of those scenes and those epic shots as we go into [inaudible].

[ Music ]

Yeah, yeah.

Pretty epic shot, right?

At LEGO, we’ve been looking into augmented reality for quite a few years.

[ Applause ]

And we’ve had plenty of ideas in this space for sure, but it’s only recently with the power and ease of ARKit that we can finally turn them into reality.

So later this year, we are really looking forward to bringing some of those experiences to life for you and everyone else in the App Store.

Thank you very much.

[ Applause ]

All right.

Thank you.

That was a great demo.

OK, so if you would like to get more ideas for how you can leverage AR in your own apps, I recommend that you walk over to our Hands On area that is open now and will be open until 7:00 p.m. tonight.

You’ll find many more demos there from partners we’ve been working with on AR.

And that already brings us to the end of this session.

There are a ton of new technologies and APIs here that you should get your hands on.

We are enabling entirely new types of applications and we’re providing you with building blocks for forward-looking technologies like machine learning and augmented reality.

So go take a closer look and see how you can leverage these for creating even more compelling apps.

Now, downloads of our developer previews for all our operating systems and developer tools will be available from the WWDC Attendee Portal this afternoon, so you can get a hold of them right away and learn about all the things we announced.

And yes, of course, there are many sessions here at the conference that will cover many of these topics more deeply, and I also recommend that you make good use of the many labs we’ve organized.

You can meet Apple engineers right here on site who can answer your questions.

And with that, I hope you have a great conference, and I’ll see you around this week.

[ Applause ]
