Platforms State of the Union 

Session 102 WWDC 2018


[ Music ]

Ladies and gentlemen, please welcome Vice President of Software, Sebastian Marineau-Mes.

[ Applause ]

Good afternoon, everyone.

Welcome to the afternoon session of WWDC 2018.

Now, we had a really, really great session this morning.

I think you all enjoyed the keynote?

Lots of great things were presented.

And I think you saw that 2018 is a year with a strong focus on the fundamentals across our entire ecosystem where we pushed the boundaries in key technology areas.

We’re introducing numerous APIs and capabilities that enable new experiences covering a broad spectrum that ranges from machine learning, augmented reality, high performance graphics and of course, new development tools.

Now, many of the improvements in the APIs apply to all of our operating systems so they all move forward together.

And iCloud provides the fabric that enables a unified and consistent experience across all of our devices.

In iOS 12, we’ve seen a huge number of incredible new features including these great new capabilities in AR, the camera effects in Messages, multi-way FaceTime, usage data with Screen Time, richer photos and of course, a great focus on performance.

And with macOS, we're really excited to introduce Dark Mode, new Finder and desktop features, new apps like News and Stocks, a redesigned Mac App Store, and a strong focus on security and privacy. watchOS 5 brings customizable interactive notifications, support for your app content and shortcuts on the Siri watch face, background audio mode, and an improved workout API.

And in tvOS, we’re adding Dolby Atmos support so that video apps can deliver immersive audio experiences.

We heard that this morning, really amazing.

We're also adding secure password AutoFill from iOS devices, making it really easy to sign into Apple TV apps, plus VPP support and enhancements to UIKit and TVMLKit to make it even easier for you to build native apps that look and feel great.

Now our great products, our platforms, and all of your apps truly impact the world.

And when you think of the breadth and the scale of our ecosystem, it really makes us an essential part of our users' lives.

Be it helping them explore their creativity, connecting with the people they care most about or transforming the way that healthcare is delivered, we together focus on what’s most important to our users and we deliver these great experiences.

Now we think technology is most powerful when it empowers everyone.

And so we work to make every Apple product accessible from the very start.

We provide great capabilities that make our platforms and all of your apps accessible, and we encourage you to keep taking advantage of these because it's really important to those users.

Now, our users also entrust us with their most precious data.

And so at Apple, we think deeply about privacy and security.

And I’d like to invite Katie up on stage to tell you more about this.


[ Applause ]

Thanks, Sebastian.

When we think about privacy, we think about how to build privacy into all of our products and services.

And there could be a lot of details to think about.

But it’s important to think of the big picture, trust.

Now it's up to all of us to ensure that users can trust us to protect their most sensitive data.

From financial data to communications to location and photos, trust is crucial as technology becomes more and more integrated into our lives.

So how can you build trust with your users?

We focus on four key pillars and let me show you an example of each.

Now, we don’t require users to sign into Maps but instead we use rotating random identifiers that can’t be tied to an Apple ID to enable relevant results.

We use on-device intelligence to enable powerful features like search and memories in photos without analyzing photos in the cloud.

We designed Face ID so all Face ID data is encrypted, protected by the Secure Enclave and doesn’t ever leave your device.

And when we collect users’ data or allow a third party to collect data like photos, we make sure we do so with the user’s consent.

So let’s dive a little bit deeper into transparency and control.

You have all seen these alerts when you request access to location or photos.

And this alert includes a purpose string.

Now this is what you provide in order to explain why you’re requesting data and how you will use that data.

Now a good string includes a clear explanation of what features it will enable and what functionality it will improve.

Now, the more specific you are with your users, the more likely they are to grant you access.

We think it's critically important for users to understand how their data will be used.

So app review is paying closer attention to these purpose strings.

So if you have a purpose string like this, which is clearly not valid, you may get dinged by App Review.

Now this string technically explains how data will be used.

But it lacks detail so it’s really hard for your user to make a decision.

Now some users may have concerns about granting your app microphone access but it may be key to your app’s functionality.

So that’s why it’s important to have a clear purpose string like this one that explains exactly how you are going to use the data.
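A purpose string goes in your app's Info.plist under the relevant usage-description key. Here's a minimal sketch; the key is the real one for microphone access, but "Voice Notes" and the wording are purely illustrative:

```xml
<!-- Info.plist: this string is shown to the user in the consent alert -->
<key>NSMicrophoneUsageDescription</key>
<string>Voice Notes uses the microphone to record your audio memos so you can play them back later.</string>
```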

Now great features don’t have to be at the expense of privacy.

Instead, they can support privacy by making it clear to users how you're going to protect their data and how it's being used.

Now, we care deeply about security.

And in order to protect all the sensitive data that resides on the device in apps and in the cloud, we think about security holistically.

And we provide technologies to make it easy for you to build secure apps.

Here’s a few examples of the technologies that we provide.

On iOS, we automatically encrypt app data by default.

Over the network, App Transport Security means you never have to patch client networking libraries again.

And in the cloud, CloudKit securely stores and syncs data across devices, letting you focus on building a great experience for your users without having to worry about managing account state or your cloud credentials.
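As a minimal sketch of what that looks like in practice, here's how saving a record to a user's private database might go. The "Note" record type and "text" field are hypothetical names for illustration:

```swift
import CloudKit

// Save a record to the user's private database; CloudKit handles
// account state and syncing across the user's devices.
let record = CKRecord(recordType: "Note")
record["text"] = "Hello, CloudKit" as CKRecordValue

let database = CKContainer.default().privateCloudDatabase
database.save(record) { savedRecord, error in
    if let error = error {
        print("Save failed: \(error)")
    } else if let savedRecord = savedRecord {
        print("Saved record \(savedRecord.recordID)")
    }
}
```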

And it enables you to take advantage of best-in-class security, including built-in two-factor authentication.

Since its launch three years ago, more than two-thirds of Apple ID accounts have adopted two-factor authentication.

This is a huge success compared to the rest of the industry where we see less than 10% of accounts protected by two-factor authentication.

But this is very important to us.

And we work continuously to make our users’ accounts more secure so you’re the only person who can access your account even if someone else knows your password.

And new in iOS 12, we’re making using passwords more convenient and more secure for you and your users.

We all know that a secure password is critically important to keeping your information and your identity secure.

But they can be hard to remember, and it's tempting to use weak passwords or reuse them.

And this creates problems for you as a developer as well.

Now, users may abandon account sign up and you have to deal with password reset requests.

But worst of all, is the potential for compromised accounts due to weak passwords.

So we have a solution: iOS 12 makes it easy for you and your users to always use a strong, unique password by creating, storing, and AutoFilling the password.

But the really great thing is it will also work automatically in your iOS app too so they always get a strong password no matter where they create an account and it syncs to all of their devices.

Now it couldn’t be easier to offer automatic strong passwords.

In fact, you may not need to make any changes within your app.

So to ensure it just works, you need to associate your app with your domain.

You may have already done this if you have adopted universal links.

Then you need to label your user name and password fields.

And if the passwords don’t meet your app requirements, now you can even customize them.
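Putting those steps together, labeling the fields and customizing generated passwords might look like this sketch (the field setup is illustrative, and it assumes your app is already associated with your domain):

```swift
import UIKit

// Label the fields so iOS knows where accounts are created.
let usernameField = UITextField()
usernameField.textContentType = .username

let passwordField = UITextField()
passwordField.textContentType = .newPassword  // asks iOS to offer a strong generated password

// New in iOS 12: describe your app's password requirements so
// generated passwords always satisfy them.
passwordField.passwordRules = UITextInputPasswordRules(
    descriptor: "required: upper; required: lower; required: digit; minlength: 12;"
)
```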

We’ve also made it easier for your users to get to their passwords.

They can just ask Siri and once they’ve authenticated, they’re taken right to their password list.

And on top of that, to help clean up old passwords, we're making it really easy to tell if any of your passwords have been reused across your existing accounts.

Your iPhone will flag these passwords and take you right to the website, where you'll be able to replace them with a strong password.

We're also making those one-time passcodes that are texted to your users much more convenient.

They'll automatically appear right in the QuickType bar and you can fill them in with just a tap.
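Opting a field into this behavior is a one-liner; this sketch shows the content type that surfaces SMS codes in the QuickType bar:

```swift
import UIKit

// Mark the field so incoming SMS one-time codes are offered for AutoFill.
let codeField = UITextField()
codeField.textContentType = .oneTimeCode
```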

[ Applause ]

We're also creating a new extension point for third-party password managers to enable them to supply passwords for AutoFill in apps and Safari.

[ Applause ]

Now these features work across iOS, the Mac and even Apple TV for a great experience across your Apple devices.

We care deeply about privacy and security.

And they’re foundational to all of our products.

So we provide the ability for you to build on this foundation to protect, secure and earn your users’ trust.

And now, handing it back to Sebastian.

[ Applause ]

Thank you, Katie.

Aren't these new password features amazing?

Really, really great.

That was great.

Thank you.

Now, ultimately, we also promise our users great experiences.

And we usually think about great experiences as being great innovative features.

But equally important is not to compromise that delight with unpredictable and slow software.

This is top of mind for the Apple Engineering Team.

We develop tools and practices that help us with this.

And then we work to bring these same tools to all of you so that you can apply them to your applications.

Available to you are a number of tools and techniques to help you make your code more reliable and robust.

It’s important for your app to be predictable.

And of course, making your app run fast is critical.

And for that, we have a number of performance tools at your disposal.

Now we understand that optimizing performance across complex systems and applications is challenging.

And this year, we worked a lot on this.

We’ve developed a lot of new tools and techniques and want to bring you some of these powerful new capabilities.

So in Xcode 10, we've extended Instruments' capabilities and enabled you to take it even further with your own custom tools and workflows.

Now this all starts from a legacy API.

Some of you may know this and have used it.

I know I'm guilty of it: printf. It's like the Swiss Army knife of APIs.

We use it to debug and trace through our code but we all know that it’s slow.

And so two years ago, we brought you this new API called os_log.

It’s an efficient and performant API that captures logs and tracepoints across all levels of the system.

It’s fast and lightweight and if you’ve not adopted it already, you really should.

It’s great.

And our newest addition this year builds on top of os_log, and it's called os_signpost.

It’s a powerful technique that provides rich, contextual data for your application in a format that instruments can interpret.

So you can use signposts to trace through your code, and you can also use them to bookend critical sections of your functions.

And once you have the data, the real power comes in the built-in and custom Instruments visualizations.

Now, we have this new Custom Instruments support, and the best way to convey the full power of this, I think, is through a demo, so Ken will show us what our tools can do.


[ Applause ]

Thank you, Sebastian.

So I’m working on my Solar System Exploration app here.

And I’ve noticed I’ve got a little bit of a performance problem.

So every time the app goes to update its data, you know, when it launches or when I press command R like that, you can see the UI, it gets really choppy.

The planets, they kind of stutter as they move around their orbits.

And then once the update completes, well, it’s pretty smooth.

So I want to figure out what’s going on here.

Now, back over in my code, PlanetUpdateService.swift is the file that handles that planetary update.

So I want to add some logs, some signposts to help me understand what’s going on in my code.

So I’m going to start by adding a log handle.

So I’m going to use the new pointsOfInterest category.

Now this is a special new category.

Anything that I log with it is automatically going to show up right inside instruments.

Now, the first thing I want to see is when we kick off this update.

And that happens in this method.

So I’m going to add my first log statement here.

I’m going to say requesting planet data so that we could see that.

And then what I really want to know is how long is it taking to process and parse all the data that I’m doing here?

So right here is where that happens.

And to help me visualize this, I’m going to add a couple of signposts.

So the first signpost is going to be a begin-type signpost here, just before I start doing the work.

Then I’m going to add another signpost right here after I finish doing the work.

That’s an end-type signpost.

So this is going to create a time interval for me, automatically calculate the delta and surface that right up through instruments.
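The demo's code isn't shown in the transcript, but the steps described map onto the os_signpost API roughly like this sketch (the subsystem string and signpost names are illustrative):

```swift
import os.signpost

// A log handle with the new pointsOfInterest category: anything logged
// with it shows up automatically in Instruments.
let log = OSLog(subsystem: "com.example.SolarSystem", category: .pointsOfInterest)

// An event signpost: appears as a flag in the Instruments timeline.
os_signpost(.event, log: log, name: "Fetch Planets", "Requesting planet data")

// Begin/end signposts bookend the work; Instruments calculates the
// delta automatically and draws it as a time interval.
os_signpost(.begin, log: log, name: "Parse Planet Data")
// ... process and parse the downloaded data ...
os_signpost(.end, log: log, name: "Parse Planet Data")
```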

So let’s profile this in Instruments and see what kind of data we get.

So I go to Product, select Profile.

Xcode’s going to build my app, launch Instruments and then we’ll start to see, well, we’ll start seeing data stream in here.

Now right here, you can see the pointsOfInterest track.

So everything that I was logging with the pointsOfInterest category, that shows up here so this is my data.

I want to zoom in.

So I'm going to hold down Option and click and drag so we can get a closer look.

And we can see this little flag right here that says requesting planet data.

So that’s a result of the first log I added in my code.

Then these blue bars right here, this is where I’m processing and parsing data.

So those are the results of the signpost I added.

Now, as I look at this, I think I see what the problem might be right away.

So every time I go to process data and parse it here, I can see a corresponding spike in the CPU use on the main thread.

And to me, that is a bright red flag that I’m probably parsing and processing this on the main thread.

Not a recipe for a smooth UI.

So with just a log statement, a couple of signposts, you could see I can start to get some really great insight into the performance of my app.

Now the new tools, they let you do way more than that.

So with Xcode 10, there’s a new template that lets you create a fully customized Instruments package.

Now, one of my teammates has gone ahead and built one based on some signposts that he added to our networking framework.

And I’ve got the latest version he sent me here in my downloads.

So let me open that up and when I do, Instruments offers to install it for me.

So I’ll say install.

And now you’ll see, I’ve got a new template.

Here it is in my templates; it's called Solar System.

I’m going to double click that.

And then we’ll start recording data again.

Now, just like before, I have the pointsOfInterest track, so that's the data that I wanted to see.

But now, I’ve got much more detailed information about the networking request that I’m making here.

So again, let me zoom in so we can get a closer look.

Now, this custom Instruments package here is giving me a great visualization into how I’m using this framework.

So it’s showing me things like for example here, how many network requests am I making on average every 10th of a second.

Then down here, this track is showing me detailed information about each and every network request.

How long did it take?

It’s even highlighting duplicate requests in red.

So these are places where I’m asking for the exact same data more than once.

It looks like I’m doing that maybe even more than 50% of the time.

So I’m just leaving a ton of performance on the table and it’s exactly these kinds of insights that I need to help me use this framework more effectively.

So signposts, Custom Instruments packages, two really great new ways for you to visualize your data right in Instruments.

And that’s a look at the new performance tools.


[ Applause ]

All right.

Thank you, Ken.

That’s a really, really amazing demo.

Really great tools that all of you can use to make your apps run even faster.

Now, to recap, we just reviewed a lot of great tools and best practices that we can use to ensure that we delight our users and keep their trust.

Now, I’d like to turn our attention to the Mac.

OS X was launched 17 years ago and we’ve constantly pushed the platform forward.

We added 64-bit support in Leopard, and OS X Mountain Lion introduced Gatekeeper, a key step forward in Mac security.

And one of our key missions is to always push the Mac forward by extending its capabilities to take advantage of the latest technologies.

But as we push the platform forward, we sometimes have to deprecate legacy functionality to ensure that we’re not holding it back.

Last year, we announced that High Sierra was the last macOS release to fully support 32-bit apps without compromise.

And this year, we're announcing that macOS Mojave is the last release to support 32-bit at all.

So as we remove 32-bit support next year, these 32-bit only frameworks will also be removed such as the QuickTime framework and the Apple Java framework.

Next, let’s look at security on the Mac.

Gatekeeper has done a great job at avoiding large-scale malware attacks and this year, we want to push it even further.

We’re extending user consent, enhancing run time security and we’re launching a new Notary Service.

So let’s look at these in more detail.

As you heard this morning, we’re extending the protections afforded to sensitive system resources.

We’ve added camera and microphone and we now require user consent for API and direct access to all these resources.

What does it mean in practice?

Well, it means that your application has to gracefully handle those calls potentially blocking or failing as the user provides consent.

It’s also a really great idea, as Katie has pointed out, to provide meaningful purpose strings so when the user is faced with one of these dialogues, they understand why your app needs access.

We’re also going further in protecting sensitive user data.

And only specialized apps like backup tools require access to this kind of data.

And so we'll protect these locations by requiring user consent directly in the Security & Privacy preference pane.
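Handling those consent-gated calls gracefully might look like this sketch for the microphone; `startRecording` and `showMicrophoneHelp` are hypothetical functions standing in for your app's logic:

```swift
import AVFoundation

// Check authorization before touching the microphone, and handle
// every outcome so the app degrades gracefully.
switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized:
    startRecording()  // hypothetical: proceed, consent already granted
case .notDetermined:
    // This presents the consent dialog; the completion may arrive later.
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        if granted { startRecording() }
    }
default:
    // Denied or restricted: explain how to re-enable access
    // in System Preferences instead of failing silently.
    showMicrophoneHelp()  // hypothetical
}
```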

Next, we’re introducing enhancements to run time protections.

Now, a number of you have requested a way to extend the SIP protections to your own apps.

With our new enhanced runtime, there's a new security baseline that requires risky capabilities to be explicitly opted into.

So beyond strong code validation, it also, for example, protects your apps from code injection.

The enhanced runtime is fully backwards compatible.

It’s opt in through a simple switch in Xcode.
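For reference, the same opt-in can be expressed when signing from the command line. This is a sketch; the identity and app path are illustrative:

```shell
# Sign with the hardened runtime enabled (--options runtime).
# In Xcode this is a single switch in the project settings.
codesign --force \
         --sign "Developer ID Application: Jane Appleseed (TEAMID1234)" \
         --options runtime \
         MyApp.app
```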

And finally, we’re introducing the concept of notarized apps.

This is an extension to the Developer ID program for apps that are distributed outside of the Mac App Store.

And it has two main goals.

The first is to detect malware even faster than today before it gets distributed to our users.

And second, provide a finer-grained revocation capability so that we can revoke a specific version of a compromised app as opposed to revoking the entire signing certificate.

Now here’s how it works.

You develop, debug, and build your app as before.

And you sign it with your Developer ID Certificate.

But before distributing it to your users, you submit it to the Developer ID Notary Service.

Once notarized, you distribute the app through your existing channel.

When your user runs the app on their system, macOS Mojave will check with the Notary Service to make sure the app is properly notarized and is not known to be malicious.
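The workflow described above can be sketched with the command-line tooling of the era; the bundle ID, username, and file names are illustrative:

```shell
# 1. Submit the signed, zipped app to the Notary Service.
xcrun altool --notarize-app \
             --primary-bundle-id "com.example.myapp" \
             --username "dev@example.com" \
             --file MyApp.zip

# 2. Once notarization succeeds, staple the ticket to the app
#    so Gatekeeper can verify it even when the Mac is offline.
xcrun stapler staple MyApp.app
```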

Now, the service is not app review.

There are no new guidelines being imposed on Developer ID apps as a result of the Notary Service.

It is used exclusively to analyze apps for security purposes.

A future version of macOS will require all Developer ID apps to be notarized by the service before they can be installed, so we want you to get ready.

It’s available in beta today.

We encourage you to try it out and give us feedback.

And those are the enhancements to Gatekeeper in macOS Mojave.

Let's now switch gears and talk about the macOS user experience.

And to do that, I’d like to invite Kristen up on stage.


[ Applause ]

Thank you, Sebastian.

I’m excited to be here.

We have a lot of great features in macOS Mojave, including improvements to Finder, screenshots, and Desktop Stacks.

I’d like to focus on one in particular that you, as developers, can take advantage of.

And that’s Quick Actions.

With Finder Quick Actions, we’ve embedded the tools you need right where you need them in the Finder preview pane.

You can perform common actions on your files without ever leaving Finder.

And there’s different actions for different file types.

As you can see here with video and here, with a PDF.

And it’s not just built-in actions.

We know pro users especially like to create their own.

And those actions are shown here in Finder as well.

Now, developers will be able to provide custom actions from your applications using app extensions.

And as an end-user, you can also combine shell scripts, AppleScripts and Automator Actions in Automator to create an action bundle.

And these action bundles will be shown here in Finder as well based on file type.

These Custom Actions get some prime real estate in Finder and even more so in Touch Bar.

Touch Bar is great when customized.

And you can customize Touch Bar to show these actions all the time or on the tap of a button.

Moving on, in the keynote this morning, you got a sneak peek at another technology we are really excited about.

An easy way to bring iOS apps to the Mac.

We are in the midst of developing this technology in the context of these four apps, News, Stocks, Voice Memos and Home.

These apps utilize UIKit, and this is a new way to deliver great Mac apps.

Of course, AppKit is our primary native framework and it takes full advantage of all the Mac has to offer.

And in no way are we de-emphasizing that.

However, we know that a lot of you have iOS apps but don't have a native Mac experience.

And for these cases, we want to give you an easy way to bring your apps to the Mac as well.

So how are we doing this?

These UIKit apps are running in a native environment on top of a native stack.

And if you look closely, you’ll see that the stack below the UIKit app has a lot in common with the stack below the AppKit app.

In fact, these environments were built on a common foundation which in some cases has drifted apart over time.

So we’re taking this opportunity to rationalize the substrate which is great news for you developers independent of this technology because it makes it easier for you to write portable code.

These apps get all the typical Mac features and I’d like to show that to you now.

You’ve seen the new Stocks app for iPad.

I’m running a Mac version of this app built from the same sources.

Mouse events are mapped to UI events so I can click on a ticker symbol in the watchlist to see more information.

I can move my mouse over the interactive chart to see the price at a point in time and I can click and drag to see the price change over a period of time.

I’m going to click on an article to open it right here in app.

Now since this is a Mac window, I can resize it as I would like and I can also take it full screen.

I can navigate using two-finger scroll which is another example of event mapping.

And if I want to copy some text, I can select it, pick it up and drag it and drop it in my Notes app.

Now in this Note, I have a link to a news article so I’m going to click that to open it directly in News.

And we’ve populated the menu with items for this application.

So for example, I can go to the file menu and I can follow this channel.

And notice how ESPN appears directly in my sidebar.

Another Mac touch can be seen in the toolbar here where there’s a red color contribution coming from the content underneath it.

Now we have controls for the window in the toolbar including the share button so I can click on the share button to show this article with a friend.

So that’s a quick look at UIKit apps on the Mac.

[ Applause ]

Now, thank you.

We are continuing to develop this technology and we are working to fully vet it before making it available to you and your applications which we are planning to do next year.

Next, Dark Mode. You've seen that Dark Mode is a big thing for macOS Mojave, and we think it looks stunning.

Let’s take a quick tour.

The window background is dark making the content pop.

The sidebar is translucent and the content is blended vibrantly which preserves contrast with whatever may be underneath the window.

And in a few cases, we found it valuable to change the icons slightly so you can see a slight darkening of this photo icon and a new dark trash can.

But there’s something very subtle here.

The window background is actually picking up a slight hint of color from the desktop.

To show you what I mean, here is a window on top of two very different desktop pictures.

On the left side, there’s a slight blue tint in the window from that slightly blue desktop picture.

And on the right side, there’s a slight orange tint from the predominantly orange desktop picture.

This is not translucency.

We’re actually picking up our average color from the desktop and blending it into an opaque background.

And we do this so that your window looks harmonious with a variety of desktop pictures.

Let’s look at what you need to do in your apps to support Dark Mode.

Because we want to make sure to preserve compatibility with your applications, we are not automatically opting you in.

You need to build against the macOS Mojave SDK.

For example, this is how Keynote looked when we first ran it after building on Mojave.

It got a dark toolbar, but it didn't otherwise adapt to Dark Mode the way we wished.

The [inaudible] part is drawing too light of a background.

The toolbar controls are faint and hard to read.

The sidebar is the wrong material so it’s too translucent.

And in the selected segment in control, we have a white glyph on a white background.

The good news is these issues were all easy to fix.

We have simple APIs that support all the needs of Dark Mode.

And in fact, most of these have existed for years and we just had to augment them a tiny bit.

There’s NSColor.

There’s Container Views with background color properties.

There's NSVisualEffectView and materials.

There’s Template Images and a new way to colorize your content.
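To make that list concrete, here's a sketch of those APIs in use; the view class, "BadgeBackground" catalog color, and "Trash" image name are illustrative:

```swift
import Cocoa

// Semantic NSColor values resolve correctly in both light and dark mode.
class BadgeView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        NSColor.controlBackgroundColor.setFill()
        dirtyRect.fill()
    }
}

// A named color from the asset catalog, with light and dark variants.
let badgeColor = NSColor(named: "BadgeBackground")

// Template images are tinted appropriately for the current appearance.
let trashIcon = NSImage(named: "Trash")
trashIcon?.isTemplate = true
```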

So we updated Keynote with these APIs and this is the result.

It looks great.

These were pretty simple changes.

We invite you to try this today.

If you’re already following the best practices of using asset catalogs in system colors, you could be pleasantly surprised at how close you already are.

And since these techniques are available on previous releases, you can adopt and easily back-deploy.

It, of course, depends on how many custom controls you have in your applications but for a few of our apps, it was as little as a day of work.

We give you some useful tools for it as well.

Well, I’d like to welcome Matthew to the stage to show you how Xcode 10 supports adoption of Dark Mode and much more.

[ Applause ]

Thank you, Kristen.

Our Xcode release this year is focused on your productivity.

Workflow improvements, performance improvements, and new feature support in all of our SDKs.

And of course, when running on macOS Mojave, Xcode has a whole new look and feel.

So let’s start by taking a sneak peek at how Xcode can make your Mac apps look great in Dark Mode too.

So here we are back in our solar system application.

We’ve been converting it over to Dark Mode and we’ve made great progress so far.

There’s a couple of items left I need to finish here.

There’s a darker version of this globe my designers have provided to us.

And there’s these two hard-coded boxes that have colors that I need to change.

Xcode’s asset catalogs makes this easy.

Let’s start with this image.

I'll change over to the tab with my assets, and we can see I've already defined dark variants for all of my colors.

I’ll select the group with all of my images and here’s the planet image I’d like to add a dark variant for.

That’s easy.

I’ll select it.

Go in the Inspector and add a dark variant.

And my designers have sent me the assets here so I can just pull them out of my downloads folder and put them into the catalog.

That’s it.

You’ll see when I go back to my interface now, the globe is updated to match the appearance of the interface builder canvas.

Now, I’ve already specified all the color variants that I need.

So to update these boxes, I’ll just select both of them, go to the Inspector and change the fill color to one of my catalog colors.

We’ll take the badge background color.

Great, so now my interface is looking pretty good.

Now when designing interfaces, I often like to check the other appearance as I’m going along to evaluate my progress.

Interface Builder makes this easy.

Down at the bottom here, there's a new appearance bar that allows me to toggle between appearances.

I’ll just select the appearance on the left and now I’m seeing my application in the light appearance as well.

So I can easily evaluate my progress.

Let’s run our application and see how we’ve done.

We’ll update our assets and we’ll launch our application.

And we’ll see here that the application launches.

And great, it’s looking pretty good.

Now the application launched in the dark mode to match my system.

But while I’m developing, I can change the appearance.

Down here in the debug bar is a new appearance toggle that’s also in Touch Bar, and it gives me access to all the appearances.

I can select the light mode, the dark mode, even high-contrast modes to evaluate accessibility.

So I’ll select the light mode.

We’ll load those assets, and there’s my application in light mode as well.

So very simply, with asset catalogs, interface builder, and our debugging tools, it’s really easy to make your apps look great in Dark Mode, too.

[ Applause ]

Now I know many of you have wanted a dark mode appearance in Xcode for a long time.

It’s been one of our most popular requests.

In fact, just a couple weeks ago, there was a posting in the App Store about this feature.

It was from a user named Ronnie Bo Bonnie.

This is true, I'm not making this up.

But I just wanted to take a moment and say Ronnie, if you are out there, no charge.

[ Laughter ]

[ Applause ]

Now we also have some other improvements to our design tools to share with you today.

Form-based UIs like preferences and inspectors are common in Mac apps.

And Cocoa’s NSGridView is the perfect system for laying them out.

So we’re bringing the power of NSGridView right into Interface Builder where you can now design your column- and row-based UIs just like working with tables in a spreadsheet.

Drag-and-drop content [applause] yes.

Yeah, you can clap for that.

Spreadsheets can be cool.

You can drag-and-drop content, use contextual actions, and you get system access to things like right-to-left layout.
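The same kind of form layout can also be built programmatically with NSGridView; this sketch uses hypothetical label and field names:

```swift
import Cocoa

// Build a two-column form, like working with rows in a spreadsheet.
let nameLabel = NSTextField(labelWithString: "Name:")
let nameField = NSTextField(string: "")
let emailLabel = NSTextField(labelWithString: "Email:")
let emailField = NSTextField(string: "")

// Each inner array is one row of the grid.
let grid = NSGridView(views: [
    [nameLabel, nameField],
    [emailLabel, emailField]
])
grid.rowSpacing = 8
grid.column(at: 0).xPlacement = .trailing  // right-align the label column
```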

Now when designing your interfaces, the library is an important tool, and we have a whole new workflow for you because the library is now separate from the inspectors.

You can now take the library and reposition it wherever you want.

You can adjust the size to match your layout.

And you can keep the library up while working or have it automatically dismiss when you are done.

[ Applause ]

And the library works great with all of our content types, including media and code snippets.

And finally, with our design tools, you’ll notice they’re just snappier, with faster document loading and more responsive canvas interactions.

Now we’ve also spent time focusing on our source editing tools, keeping them fast, fluid, and informative.

We started with performance, where you’ll now see the editor loads large documents much faster, all while keeping smooth scrolling at 60 frames a second.

Next, we doubled down on stability in SourceKit and enhanced the robustness of our language integration.

So now more of your colorful comments will stay inside of the editor rather than being about it.

[ Applause ]

Code completion and navigation are two essential workflows, and we’ve improved on both.

Code completion now provides more targeted results and limits completions to high-confidence matches.

And when navigating, with jump to definition, the destination list will now provide contextual details like file and line information to help you easily get to where you need to go.

[ Applause ]

And you’ll see the same contextual information in the new callers option in the action menu, which is a seamless way to move throughout your projects.

Now last year, we introduced refactoring for all languages, including Swift.

And you, the Swift community, embraced the opportunity and added a number of new actions.

These actions all streamline common programming scenarios and are built right into Xcode now, just a click away.

Now refactoring is just one of many ways you can modify the source in your project.

And to make it easier to keep track of your changes, we’re introducing a source control change bar.

The change bar is on the left side of the editor and highlights lines of code which have changed since your last checkout.

The style and color of the indicator reveal the type of change, making it easy for you to see at a glance changes you have made, your team members have made, and those which might be in conflict.

Now this feature is yes.

[ Applause ]

I agree. I think this feature is pretty awesome.

And I’d actually like to show it to you in a demo now.

So we’re going to go back to our solar system application, and I have some changes I’d like to make in one of our source files.

It’s our scene view controller here.

So I’ll scroll down in the editor to the place I’d like to make changes.

And here we can see, on the left, just to the left of the line numbers, the source control change bar is indicating there are some upstream changes a team member has made.

In fact, if I had already made the changes to this line, you’d see the indicator turns red, highlighting a conflict.

If I put my cursor over the indicator, you’ll see it highlights the ranges of characters which have changed and are in conflict.

And if I click on the indicator, it brings up an action menu with both a description of the changes and some actions I can take. I see my team member has added more descriptive comments here.

I think we’ll take his change, so I’ll use the action menu to discard my change, and go up under the source control menu to pull his changes in.

So here are his changes: very descriptive comments.

I can scroll to the bottom of the editor, see if there’s anything else I’d like to look at here.

Here’s another new feature of Xcode 10: our editor supports overscroll now.

[ Applause ]

So going back to the lines of code I’d like to change, I’d like to convert these hard-coded functions into properties that pull the colors from the asset catalog.

Now there are three of them that I’d like to change, and they’re a bit spread out now because of all these comments.

Well, no matter, with Xcode 10, we’ve improved code folding.

Basically, you can now code fold anything you want.

And we’ve brought back the code folding ribbon.

So just to the right of the line numbers, I can click [ Applause ]

to collapse the code away.

And we have this nice, svelte presentation of the collapsing now.

Now this is the first function I’d like to change, and I see that all of these functions are very similar, and it would be great if I could make all of these changes all at the same time.

Well, I can do that now, too, with multi-cursor editing.

[ Applause ]

The key to multi-cursor editing is two keys, control and shift.

So I’ll hold down those two keys and just click at the beginning of each of the other functions.

We’ll use range selection, and we’ll just change that to var.

We’ll change those into colons.

And we’re pretty good so far.

Now I happen to know that I’ve named my colors in the catalog the same name as my properties here.

So we’ll just select those names and copy them.

And now let’s go to the implementation and change that.

So we’ll drop three more cursors, and we’ll just select all of this, type in named, paste in those colors, and we’ve made all those changes.

It’s like three times faster.

[ Applause ]

Now multi-cursor editing also works great with column selection.

So here I have all of my IBOutlets defined with weak.

If I hold down the option key and I select all of these in here [cheering] oh, yeah.

Oh, yeah, so let’s just convert those into [applause] unowned.

And just like that, I can make my changes, and then use the source control bar to make sure that I got the changes I want.

So those are some of the great new editing features you’ll find in Xcode 10.

[ Applause ]

So with additions like the source control change bar and multi-cursor editing, alongside performance and stability improvements, Xcode 10 continues to raise the bar on our source editing experience.

Now in addition to the source control change bar, we are also extending our source control integration.

We started first by unifying our conflict resolution system with Git, making the results more accurate, more predictable, and significantly faster.

Next, we’ve enhanced the pull action to support rebase.

So you can replay changes between branches [applause] yes, it’s okay to clap for that.

[ Applause ]

You can replay changes easily between branches without the unnecessary merge commits.

And to keep your connections secure, Xcode will help you create SSH keys and upload them directly to your service accounts.

[ Applause ]

And this is the perfect accompaniment for our service integrations, because in addition to GitHub, we’re adding two new services this year, support for Atlassian’s Bitbucket Cloud and Bitbucket Server [ Applause ]

and support for GitLab.com and self-hosting.

[ Applause ]

There’s a lot of source control love here.

And both of these work great because their web interfaces can check out projects directly into Xcode.

Now as Sebastian mentioned earlier, we are passionate about giving you great tools to debug and optimize your apps.

And this year, we focused on the usability and performance of our tools.

We started with LLDB, our low-level debugger, which now has faster startup and more precise access to your variables in the console and Xcode’s variables view.

Next, we’ve made downloading debug symbols five times faster.

So now it’s more like seconds rather than minutes.

[ Applause ]

We’ve enhanced our memory debugging tools to have faster loading, and saving of documents, and a new compact layout to help you visualize even more of your application at once.

And earlier this spring, we introduced energy diagnostic reports.

They’re like crash logs, but for energy usage.

These reports are automatically collected on iOS for TestFlight and App Store apps and surface details for foreground and background usage.

These reports show up in the organizer and include stack frames to illustrate the issue.

And just like with crash logs, you can open these reports in your project to navigate your code and find and fix the issues.

Oh, and to go alongside these, we also have some improvements in testing.

Earlier this spring, we enhanced code coverage, adding a command line tool to access coverage data, and giving you the ability to select individual targets to collect coverage for.

This means your coverage reports can now be actively focused on the areas you are coding and testing.

In addition to these, we’re adding two new testing workflows this year. Actually, three.

The first is that you can now automatically include or exclude new tests in your test bundles.

Next, you can randomize the order that your tests are executed in to minimize accidental dependencies.

And our biggest change for this year is you can now execute your tests in parallel inside of Xcode.

[ Applause ]

Now last year, you could use xcodebuild to test on many devices in parallel, sending all the same tests to each device.

Now this is perfect for use with continuous integration where you want the broadest scale of testing.

When you’re working in Xcode, you’re more often focused on a single configuration, and you want your testing to finish as quickly as possible.

This is the configuration Xcode 10 dramatically improves with parallel testing.

Behind the scenes, Xcode will create copies of your Mac app or clones of your iOS simulator and then fan your test suites out to them.

This means you continue to test a single configuration, but your tests finish in a fraction of the time.

And parallel testing automatically scales to the capacity of your machine, which means on an iMac Pro, it can be pretty awesome.

How awesome, you might ask?

Well, let’s see in another demo.

So we’re going to go back to our solar system project one more time.

And here we see the testing log for our Mac tests that we ran before.

It took about 14 seconds.

Let’s run it with parallel testing now.

I’ll click and hold on the toolbar and select the test action.

And we’ll bring up the scheme sheet.

In the options, I’ll just click execute in parallel and click test, and we’re going to build our tests for parallelization.

And if you watch the Dock in the lower right, you’ll see that when we launch the tests, we now launch many different processes, one for each of our test suites, and collect the results.

And if we look at our testing log, it finished almost four times faster.

[ Applause ]

While parallel testing works great for unit tests, it works awesomely for UI tests.

So I will select the iOS version of our application, and we’ll kick off testing.

So behind the scenes, we’re going to go and clone the active simulator, and then set up a number of debug sessions for each one of these, and then switch over to a space with all of those simulators running.

So you’ll see we’ll install different test suites on each of these simulators and kick off a different set of tests on each.

So I can run all of my same tests faster on all these devices, which gives me the ability to add more tests and make a much better app.

This is ludicrously awesome parallel testing in Xcode 10.

[ Applause ]

Last year, we introduced a preview of our new build system written in Swift.

Many of you tried it out with your projects and provided great feedback.

And so I’m happy to say our modern build system is now on for all projects.

In addition to greater reliability and stability, we also focused on overall build performance.

You’ll find the build system now has faster rebuilds, better task parallelization, and uses less memory.

And the build system now includes new richer diagnostics to help you tune your project configuration to achieve the best build performance.

Now staying on build performance for a second, I’d like to talk about another core component of our release, Swift 4.2.

Over the last year, we have made steady improvements to compile times with Swift projects.

We’ve sampled a number of open source iOS applications, and compared to our previous release, debug build performance with Xcode 10 is often twice as fast.

And for release builds, code size is up to 30% smaller using the new size optimization, which is a great win for cellular downloads.

Now in addition to these, Swift also adds a number of additions and runtime language improvements.

Some of these are tongue-twisting APIs like synthesized Hashable conformance.
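As a minimal sketch of what synthesized Hashable conformance means (the `Planet` type here is hypothetical, not from the session): simply declaring conformance lets the compiler generate `==` and the hashing logic from your stored properties, so your type works in sets and as dictionary keys with no boilerplate.

```swift
// Swift 4.2: the compiler synthesizes Equatable and Hashable
// from the stored properties, so no hashValue or == is written by hand.
struct Planet: Hashable {
    let name: String
    let orbitIndex: Int
}

// The synthesized conformance lets Planet be used in a Set directly.
let inner: Set<Planet> = [
    Planet(name: "Mercury", orbitIndex: 1),
    Planet(name: "Venus", orbitIndex: 2)
]
print(inner.contains(Planet(name: "Venus", orbitIndex: 2)))  // true
```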

A perfect place to try out these APIs is in Xcode Playgrounds because Xcode Playgrounds now include new REPL-like interaction that allows you to evaluate new lines of code without restarting the Playground session.

[ Applause ]

Here’s a Playground of our solar system view.

And the new lines of code added to move to the next planet are evaluated and return results all while the Playground continues to run.

So all of these additions to the runtime language and tools continue Swift’s great momentum as part of Xcode 10.

And we also have another release coming up for you in the language, Swift 5.

The focus of Swift 5 is greater adoption by delivering Swift as part of the OS.

Apps will no longer need to include the Swift runtime when delivering on our newer OS releases, resulting in smaller downloads [applause] and faster launches.

[ Applause ]

We’re very excited about this, too, and we have made great progress toward this goal.

And you’ll see it in a release coming early next year.

So Xcode 10 includes a number of great productivity improvements, alongside deep investments in performance, robustness, and stability throughout our tools.

And all of this to help you do your best work now faster than ever.

And that is Xcode 10.

[ Applause ]

Next, I’d like to invite up John to tell you what’s new in Machine Learning.


[ Applause ]

Thank you, Matthew.

Machine Learning is at the foundation of our operating systems and many of our applications.

But our goal has been to provide simple and easy-to-use API to make Machine Learning accessible to everyone.

And you’ve all done a fantastic job bringing so many innovative features and intelligence to your applications.

Last year we introduced Core ML with its base performance frameworks as well as Vision and Natural Language at a higher level.

And I’d like to start by showing you some improvements we’re making with Vision and Natural Language.

If we take Vision and, of course, a photo that we want to analyze, we now have APIs that support object detection and bounding boxes, like this sign being held in the picture.

We can do face detection and facial landmark detection.

And also, barcodes, like this QR code, can be detected in your image.

Now in addition to the APIs we previously provided for depth, we now support people segmentation, so you can remove a person from a photo and separate them from the background or substitute in the background for something a little different.
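The detection APIs mentioned above follow Vision's request/handler pattern. A hedged sketch of barcode detection (assuming `cgImage` is a `CGImage` you already have; the rest is the standard Vision API):

```swift
import Vision

// Build a request that finds barcodes (including QR codes) in an image.
let request = VNDetectBarcodesRequest { request, _ in
    for case let barcode as VNBarcodeObservation in request.results ?? [] {
        // boundingBox is in normalized image coordinates (0...1).
        print(barcode.symbology, barcode.payloadStringValue ?? "", barcode.boundingBox)
    }
}

// Run the request against the image; `cgImage` is assumed to exist.
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
```

Face, landmark, and object detection follow the same shape with `VNDetectFaceRectanglesRequest`, `VNDetectFaceLandmarksRequest`, and so on.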

For Natural Language, we have a brand-new, easy-to-use, Swift-focused API.

So you can take simple sentences like this one and automatically identify the language as English.

You can tokenize the sentence and tag its parts of speech, all with simple API.

And as one other option, you can do named-entity recognition.

Here, determining that the sentence is talking about Apple as the organization and a location in San Jose.

Now you might think this is easy in languages like English, but we support many more, including French, German, Japanese, and this Simplified Chinese example.
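A sketch of the language identification and named-entity recognition just described, using the new NaturalLanguage framework (the example sentence is my own, not the one from the slide):

```swift
import NaturalLanguage

let text = "Apple is opening a new office in San Jose."

// Identify the dominant language of the text.
let recognizer = NLLanguageRecognizer()
recognizer.processString(text)
print(recognizer.dominantLanguage ?? NLLanguage.undetermined)  // english

// Named-entity recognition: find organizations and places.
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = text
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: [.omitWhitespace, .omitPunctuation, .joinNames]) { tag, range in
    if let tag = tag, tag == .organizationName || tag == .placeName {
        print(text[range], "→", tag.rawValue)
    }
    return true  // keep enumerating
}
```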

Now let’s look at Core ML.

This is the foundation of our Machine Learning technologies.

And just one year ago, we introduced Core ML here.

And since then, we’ve gained adoption by every major Machine Learning training framework and format.

This is just incredible to have achieved in only one year.

But we didn’t want to stop there.

We’re introducing Core ML 2.

And we focused on making the models execute faster, those models smaller, and making it far more customizable.

And we know these are the features that were most requested.

To look at performance improvements, we’ve added a new batch API.

Where previously you needed to do inference on each image, passing them between the CPU and GPU, you can now bundle those inference requests together and exploit the full performance of the CPU and GPU.
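A hedged sketch of the batch API in use (assuming `model` is an `MLModel` you've loaded and `inputs` is an array of `MLFeatureProvider`, one per image; those names are placeholders):

```swift
import CoreML

// Bundle many inference requests into a single batch instead of
// round-tripping between CPU and GPU once per image.
let batch = MLArrayBatchProvider(array: inputs)
let results = try model.predictions(from: batch, options: MLPredictionOptions())

// Read each result back out of the batch provider.
for i in 0..<results.count {
    let output = results.features(at: i)
    print(output.featureNames)
}
```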

Through this technique and enhancements to the Metal Performance Shaders underneath, we now have up to 30% performance improvement on large networks like ResNet.

But if you’re using a smaller network like the kind you’re going to be using on iOS, we see up to four times improvement when running with MobileNet.

Now we didn’t stop there.

We wanted to look at making the model smaller, so we now support quantization.

So we can take a model that previously shipped in Float 32, such as this example again from MobileNet, and reduce it down to Int 8, taking its size from 17 megabytes to less than 5.

This is a huge saving for the models that you bundle with your applications.

Now you can do further reduction through features like table lookup quantization.

And we support many other features, including support for custom models now and, a very popular feature, flexible shapes.

So you no longer need to ship a model for each shape that you want to do inference on.

You ship one model, and our simple API takes care of everything for you.

Now let’s talk about Create ML, our brand-new, easy-to-use machine learning training framework.

It brings together the power of machine learning, Swift, Xcode, and Xcode Playgrounds.

No more downloading packages from the Internet and going through long, complicated tutorials to train a model.

We support feature-level training such as image classification and natural language.

And if you do want to go deeper in machine learning, we support traditional types of algorithms such as linear regression and boosted trees as well as traditional data processing.

But we think people will want to use these feature type of training far more, so let’s look at those examples.

For Natural Language, you can now have your own custom Natural Language model that does text classification, word tagging, and of course, we support multiple languages.

So you could train a model with very small datasets to do sentiment analysis, such as these reviews for a movie, where you just train with positive and negative strings, and you build your own custom text classifier.

And then you could do the same for domain analysis, being able to train a model to understand whether you’re talking about a hotel or a restaurant in a given sentence.
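A sketch of training such a text classifier with Create ML on the Mac (the file name and column names here are hypothetical; the API is Create ML's `MLTextClassifier`):

```swift
import CreateML
import Foundation

// Load a table of labeled strings, e.g. {"text": ..., "label": "positive"}.
// "reviews.json" and the column names are placeholders for your own data.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "reviews.json"))
let (training, _) = data.randomSplit(by: 0.8, seed: 5)

// Train a sentiment classifier from the labeled text.
let sentiment = try MLTextClassifier(trainingData: training,
                                     textColumn: "text",
                                     labelColumn: "label")

// Save the trained model so it can be bundled into an app.
try sentiment.write(to: URL(fileURLWithPath: "Sentiment.mlmodel"))
```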

Now we think, by far, image classification will be the most popular kind of training that people want to do, and so we’ve put a real focus on this.

Traditionally, if you were training a very large model with what might only be a small dataset, because as a developer that’s all you have access to, your model wouldn’t train well; it overfit, and you’d get poor predictions.

Now Apple has extensive experience in training very large models with datasets of photos in the many millions.

And we want to bring all that experience to all of you.

And through a technique called transfer learning, you can train your own custom image classifier.

So we’ll bundle our model into our OS, so there’s no need for you to ship that.

You take your data, and use transfer learning with Create ML, and augment our model.

That means you only need to ship the part of the model that you’ve augmented, bringing a huge saving to your applications.

So we’ve worked with a number of developers who already have models in the 100-megabyte range, just to add one intelligent feature to their application.

And now, through transfer learning, they can take that model size down to three megabytes.

[ Applause ]

Now this is far cooler to see if you see how it’s all done inside Xcode and Xcode Playgrounds, so I’d like to invite Lizzie up to give you a demo of that now.

[ Applause ]


Thank you, John.

Let’s take a look at how to create an app to classify different types of flowers.

Now I’ve started by using a state-of-the-art image classifier model called Inception v3, but there are two problems with this approach.

One, this model is quite large.

It’s taking up 100 megabytes in our app.

And the second is even though this model has support for 1000 classifications, it can’t correctly classify a rose.

Now normally what I’d have to do is switch to a new development environment, download an open source machine learning library and spend hours training a new model.

But now with the power of Create ML, you can do this in minutes and in Xcode.

Now I’ll switch to a new Playground and import CreateMLUI.

The next step is to define a builder that can build image classifier models.

Then to enable drag-and-drop interaction with this model, we can show the builder in the live view.
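The steps just narrated amount to only a few lines in the Playground (a sketch of the demo's workflow, assuming the CreateMLUI framework shown on stage):

```swift
import CreateMLUI

// Create a builder that trains image classifier models,
// then show it in the Playground live view so images can be
// dragged in to start training.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```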

And see, on the side we get a prompt to drag in images to begin training.

Now over on my desktop, I happen to have a bunch of different images of flowers organized into folders with the name of the particular one that they are.

So we have some daisies, hibiscuses, and of course, some roses.

Now what I’ll do is I’ll take this folder and drag it into the UI.

And instantly, an image classifier model begins training on the Mac, accelerated by the GPU.

And right away, I can see what the accuracy was on this training dataset.

But what I’d really like to know is how it performs on new types of flowers that it hasn’t seen.

And I’ve set some of those aside here, and I can just drag them in to let the model begin evaluating on these new ones.

And if I scroll, you can see what the actual label was for each type of flower and what the model predicted.

Now 95% is pretty decent on this dataset.

So what I’d like to do is add it into my app.

And you can do so just by dragging and dropping it.

I’ll then add it.

And if we take a look at this new model, we can see it’s only 50 kilobytes.

That’s a huge savings.

So I’ll go ahead and delete [ Applause ]

I’ll delete the 100-megabyte model and initialize the new image classifier.

Now if I rerun the app, it’s bundling this new model into the application.

We can go ahead and test it to see if it can correctly predict on the images that we’ve trained it on, or new images of the same types of flowers.

And indeed, it can correctly classify a rose.

Let’s try it on a hibiscus.

And it can correctly predict on those, too, since we’ve trained it and incorporated it into our app.

So as you’ve seen, we’ve been able to train our own classifier models using Create ML in a fraction of the amount of time to produce models a fraction of the size, all using Swift and Xcode.

Back over to you, John.

[ Applause ]

Thanks, Lizzie.

Isn’t that cool, a custom image classifier trained with three lines of Swift, in seconds, right there on a Mac?

So we’ve looked at new Vision and Natural Language APIs and the enhancements we’ve made there; our improvements with Core ML 2 with smaller, faster models and even more customization; and Create ML, our brand-new machine learning training framework for the Mac.

Now I’d like to talk about another area of intelligence that we’ve built into the OS, and that’s shortcuts, a powerful new way for you to expose key capabilities of your applications through Siri.

And you can even expose these key capabilities using voice commands.

Previously, sections of the OS that suggested features and actions were reserved for Apple’s software; now they’re accessible to you through shortcuts.

We do all this prediction on device using machine learning that preserves your users’ privacy.

So you’re probably asking how do you adopt shortcuts?

Well, many of you have already adopted NSUserActivity for features such as Spotlight search and Handoff.

And if you have, it’s as simple as adding this one line of code to make them eligible for prediction by the system.

[ Applause ]

Yeah, one line of code.
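That one line is the `isEligibleForPrediction` flag on `NSUserActivity`. A minimal sketch (the activity type, title, and view controller are hypothetical placeholders):

```swift
import UIKit

// An activity you may already publish for Spotlight and Handoff.
let activity = NSUserActivity(activityType: "com.example.team.checkSchedule")
activity.title = "Check Game Schedule"
activity.isEligibleForSearch = true

// The one new line: opt this activity into shortcut prediction.
activity.isEligibleForPrediction = true

// Attach it as usual, e.g. on the current view controller.
viewController.userActivity = activity
```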

But if you want the full, rich experience of shortcuts, then you want to adopt the new SiriKit Intents API.

That allows rich, inline capabilities of your application to be exposed in Siri, custom voice triggers and responses, and more importantly, more targeted predictions of when those shortcuts will be interesting to your users in the future.

Now a great shortcut is one that accelerates access to your application and increases engagement, too.

It’s one that’s likely to be repeated more often.

So in the TeamSnap example you want to be able to check your kid’s soccer game schedule every Saturday morning.

And ideally, it’s one that can be engaged right there in the Siri UI and handled without the need to punch out to your app.

But you do have the option if that’s something that you want to do.

Now when creating a shortcut, you need to do three simple things.

You obviously need to define the shortcut and do it for those actions that really are interesting to the users.

You need to donate when those shortcuts occur, even if that’s in your application, because we need that signal to be able to predict those shortcuts in the future.

And of course, you want to handle those shortcuts when they occur.
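The donation step, sketched with SiriKit (the custom intent class here, `CheckScheduleIntent`, is a hypothetical example of an intent you would define yourself):

```swift
import Intents

// Donate an interaction whenever the user performs the action in your app,
// so the system can learn when to predict this shortcut.
let intent = CheckScheduleIntent()
let interaction = INInteraction(intent: intent, response: nil)
interaction.donate { error in
    if let error = error {
        print("Shortcut donation failed: \(error.localizedDescription)")
    }
}
```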

Now if you’ve done all this, you get something pretty cool in that you can interact with your shortcut directly from HomePod.

So now without picking up your phone, you can just ask Siri from your HomePod for your kid’s soccer roster, and it will respond using the app.

Now if you also want your shortcuts to be exposed through the Siri Watch Face, you can just adopt this new relevant shortcuts API.

So that’s shortcuts, a powerful new way to expose key capabilities of your application and increase engagement through Siri.

Now I’d like to hand it over to Jeremy to talk to you about what’s new in Metal.


[ Applause ]

Thanks, John.

So Metal is Apple’s modern, high-performance, high-efficiency programming interface to the awesome power of the GPU at the heart of each of Apple’s platforms.

It accelerates both advanced 3D graphics and general purpose data parallel computations.

And since we introduced Metal in 2014, we’ve seen it used for everything from smooth, high-performance UI to modern 3D games, advanced computational photography, and the latest in AR and VR experiences.

And when we introduced our latest iPhones last fall, we were incredibly excited to reveal the next chapter in the Metal story with the A11 Bionic chip where Apple harnessed many years of deep expertise in hardware and software design to bring the Apple-designed GPU, optimized for Metal 2 with such innovative new features as tile shading and image blocks, and advancing the state of the art of GPU programming with both faster performance and lower power.

Now your applications can use Metal directly for 3D graphics and GPU Compute.

And Metal powers many of Apple’s system frameworks for graphics, media, and data processing.

Let me give you just one example.

Our iOS camera framework uses Metal to calculate depth information, identify people in photos, and generate this depth-of-field effect in this gorgeous portrait image.

And developers like Epic Games are using Metal’s broad support across all of our platforms to bring their smash-hit game Fortnite to iPhone, iPad, and Mac.

AMD’s Metal-accelerated Radeon ProRender plugins are now driving high-performance 3D content creation and professional editing in Maxon Cinema 4D and Autodesk Maya.

And apps like Gravity Sketch are using Metal to empower the next generation of artists with immersive professional VR editing.

Metal’s machine learning acceleration empowers iOS apps like BeCasso to transform your photos into beautiful paintings.

And drives automatic, intelligent image editing in Pixelmator Pro for macOS.

And those are just a few examples, as the developer adoption of Metal has been truly astounding, with more than 400,000 apps now using the Metal API.

And all systems running iOS 12 and macOS Mojave support Metal, which includes all iOS devices and all Macs released in at least the last five years, which means there are now well over 1 billion Metal systems for your applications and games.

So with Metal’s deep and broad support across all of Apple’s desktop and mobile platforms, we are now deprecating the legacy OpenGL and OpenCL GPU frameworks, starting in macOS Mojave, iOS 12, and tvOS 12.

Now apps using these legacy APIs will still continue to work in these releases, but deprecation is a first step as we wind down legacy technologies.

So if you’ve not already done so, you should begin transitioning your apps to Metal.

And we’ll communicate more details about this transition in the near future.

Now as you bring your apps to Metal, we are here to help.

The Metal API is dramatically easier to use and much more approachable than these other GPU programming APIs.

It contains a familiar, yet powerful, C++ GPU shading language.

And we provide a full suite of advanced debugging and performance profiling tools for using Metal, all built right into Xcode.

We have GPU performance counters with advanced profiling to identify your most expensive lines of shader code, a visual API debugger for navigating your Metal function calls, and a Metal System Trace to put your Metal commands in the context of everything else happening on the system.

And we’re really excited to announce two new powerful tools this year, a new Metal Dependency Viewer where you can investigate your complex, multipass rendering and command encoders, and an all-new, interactive GPU source code shader debugger where you can actually explore your Metal code right down to the pixel level.

Now you have to see these new tools in action, so I’d like to invite Seth to give you a demonstration.


[ Applause ]

Thank you, Jeremy.

Xcode’s GPU debugger is the tool for developing your Metal applications.

In the Debug Navigator on the left, you can see all the Metal API calls and command encoders used in your frame.

And on the right, you can see the results of the selected call.

The main editor shows you all the buffers, textures, and other resources that were used for that call.

Well, new in Xcode 10, we’re introducing the Dependency Viewer, a powerful way to understand how complex render passes combine to form your scene.

This gives you a blueprint of your frame, explaining and helping you understand the complex render graph of an application such as Unity’s breathtaking “Book of the Dead” demo shown here.

I can zoom out to see more of the frame.

Earlier render passes are shown at the top, with the later render passes shown at the bottom.

The lines indicate the dependencies between passes, with those for the selected pass highlighted in blue.

As you can see, with more than 100 render passes, there’s clearly a lot going on in this scene.

Now as good as this scene looks, there’s always room for more flair.

So I added an additional render pass, a lens flare.

But as you can see, something doesn’t look quite right. It’s far, far too green.

Well, let’s zoom in, select a pixel, and launch the new Shader Debugger, a powerful interactive tool that lets you visually debug your shaders.

In the main editor, I can see my source code.

And in the sidebar to its right, I can see variables touched by each line of code.

Additionally, I can expand any of these to see more details in line.

These two views visualize the area around the selected pixel, corresponding to the highlighted region in the frame attachment.

The view on the left visualizes the variable value.

And the one on the right, the execution mask.

This indicates which pixels executed this line of code.

This is an incredibly powerful way to debug the massively parallel execution of shaders on the GPU.

Now you can see here that the shape of the execution mask matches that of the visual aberration, telling me that the issue exists on this line of code.

Well, now that I know where the issue is, I can see what I’ve done wrong, using the vector length of the lens flare rather than the color of the lens flare.

That will be easy to fix.

I can now hit the update shaders button to quickly apply the fix, recompiling the shader and deploying it to the GPU.

And here we can see that my lens flare is fixed, and the scene looks cool.

[ Applause ]

So that’s the new Dependency Viewer and GPU Shader Debugger in Xcode 10, giving you powerful new tools to build your Metal applications.


All right, [applause] thank you, Seth.

So in addition to these amazing new tools, we’re continuing to advance Metal with a fantastic set of new features in iOS 12 and macOS Mojave.

Now I’m going to highlight just three of them today: GPU-driven command encoding, machine learning training acceleration, and ray tracing.

So first, GPU-driven command encoding.

Now historically, your app would encode its GPU commands using the CPU and then subsequently execute those commands on the GPU.

And while Metal enabled this encoding to be very fast, it could still become bottlenecked by the synchronization between the CPU and the GPU.

Well, now in iOS 12 and macOS Mojave, you can actually encode those commands right on the GPU itself, freeing up precious CPU time for other use by your games and apps.

And because you issue these commands right on the GPU using a compute shader, you can actually efficiently construct massive numbers of commands in parallel as well, unlocking completely new levels of rendering performance and sophistication.
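
The core idea can be sketched outside of Metal: instead of one CPU thread looping over objects and emitting draw commands serially, each object’s command is produced independently and in parallel, the way a compute shader’s threads would fill an indirect command buffer. The following is a minimal Python analogy only; `encode_draw` and the command dictionaries are illustrative stand-ins, not Metal API.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_draw(obj):
    """Stand-in for one GPU thread writing a single draw command
    into an indirect command buffer."""
    return {"mesh": obj["mesh"], "instances": obj["count"]}

# A hypothetical scene of 1,000 drawable objects.
scene = [{"mesh": f"mesh{i}", "count": i + 1} for i in range(1000)]

# Every command is encoded independently, so the whole buffer can be
# filled in parallel rather than by a serial CPU loop.
with ThreadPoolExecutor() as pool:
    command_buffer = list(pool.map(encode_draw, scene))

print(len(command_buffer))  # 1000
```

Because no command depends on any other, the work divides cleanly across however many threads the hardware offers, which is exactly what makes GPU-side encoding attractive.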

Next, I’d like to share the latest advances in Metal’s support for machine learning.

In iOS 12 and macOS Mojave, we have augmented our existing library of Metal performance shaders with an enormous array of all-new compute kernels, optimized to support machine learning training right on the local GPU on your iOS and Mac devices.

And the performance improvements we are seeing from these new Metal performance shaders on training are truly stunning, with an order of magnitude faster training times.

We’re also really excited to announce we’ve been working with Google to bring Metal acceleration to TensorFlow later this year, and the early performance results are showing an astonishing improvement of 20 times the previous implementation.

[Applause] Yeah, it’s awesome.

[ Applause ]

And last, ray tracing.

Now this is a time-honored technique to achieve incredibly realistic scenes, often used for high-end rendering and 3D product design.

However, it traditionally had to be done offline because it was so computationally expensive.

Now let me describe why very quickly.

First, you would need to mathematically model the rays from a light source as they bounce off of objects through the scene, toward the screen, and into your eye.

And to achieve higher and higher resolutions, you would need to add more, and more, and more rays until you could reach the desired resolution.

And this simple 1k-by-1k image would take nearly 6 million rays to generate.

Now each of those rays also must be processed with at least two sets of expensive mathematical calculations.

First, you had to determine if a given ray intersects a particular triangle in your scene.

And second, you must apply a material-specific shader necessary to generate the pixel.
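
That ray-triangle intersection test is classically computed with the Möller–Trumbore algorithm. As a rough CPU reference for what each of those millions of tests involves, here is a plain Python sketch; this is not the Metal Ray-Triangle Intersector API, just the standard algorithm it accelerates.

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore ray-triangle test: return the distance t along
    the ray at which it hits triangle (v0, v1, v2), or None on a miss."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:                  # ray is parallel to the triangle
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det      # distance along the ray
    return t if t > eps else None

# A ray fired from z = -1 straight along +z hits this triangle at t = 1.
print(ray_triangle_intersect((0, 0, -1), (0, 0, 1),
                             (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # 1.0
```

A handful of dot and cross products per ray per triangle is cheap in isolation, but multiplied by millions of rays against every candidate triangle, it becomes the bottleneck described next.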

Now originally, both of these operations would have been performed by the CPU.

But while the GPU can easily handle the pixel shading, the ray-triangle intersection itself could remain an expensive CPU bottleneck, and it would be incredibly difficult to move this to the GPU efficiently.

But the new Metal Ray-Triangle Intersector solves this problem for you.

And with this new API, you get a dramatic increase in performance of up to 10x in a very simple-to-use package, all pre-optimized for use with our iOS and macOS GPUs.

And it really is that simple, just a few lines of code.

And ray tracing, like many GPU compute operations, is exactly the kind of workload that scales efficiently with the available GPU horsepower.

So we can actually get even more performance by using Metal 2 support for external GPUs.

Now you really have to see this in action.

And I’d like to invite Rav to give a quick demonstration.


[ Applause ]

Thank you, Jeremy.

All right, let’s bring up this ray-traced rendering of the Amazon Lumberyard Bistro scene, using the CPU to perform the intersection calculations.

And this implementation is optimized to run on all 10 cores in our iMac Pro.

We’ve also added a little benchmark mode that times how long it takes to do 80 iterations of our ray-tracing algorithm.

And for context, that requires performing over 6 billion intersection tests.

And as you can see, we need about 12 seconds to do that on the CPU.

So let’s compare that to using the new Metal Ray-Triangle Intersector on the built-in GPU in the iMac Pro.

And you can immediately see that it’s much faster, and we only need about 1.3 seconds to do the same amount of work.

It’s so good, I’m going to do it again.

Here we go.

And it’s done.

So getting an almost 10-times performance increase is fantastic.

But of course, we didn’t just stop there.

As Jeremy noted, ray tracing is well-suited for parallelization across multiple GPUs, so I can enable an external GPU that I previously attached to this iMac Pro and get the render time cut in half.

So you’ll note the green line that we’ve added to help visualize how we’re splitting this workload across the two GPUs, with each GPU rendering half the frame in this case.
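
Because every ray is independent, splitting a frame between GPUs can be as simple as handing each device a band of scanlines. A small sketch of that partitioning idea in pure Python (the helper name is hypothetical, not an Apple API):

```python
def split_scanlines(height, gpu_count):
    """Assign each GPU a contiguous band of rows; any remainder
    rows go to the earlier GPUs, one extra row each."""
    base, extra = divmod(height, gpu_count)
    bands, start = [], 0
    for gpu in range(gpu_count):
        rows = base + (1 if gpu < extra else 0)
        bands.append(range(start, start + rows))
        start += rows
    return bands

# Two GPUs each render half of a 1080-row frame, as in the demo.
print(split_scanlines(1080, 2))  # [range(0, 540), range(540, 1080)]
```

Real renderers often balance the split dynamically, since some bands of a scene are more expensive to trace than others, but a static halving is enough to show why the render time roughly halves per added GPU.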

So this is a great improvement, but as Jeremy says, you can never have too many GPUs.

So let’s add another two for a total of four GPUs now rendering the scene.

So that’s over 40 teraflops of compute capability with our iMac Pro, and we’re rendering the scene 30 times faster than the CPU.
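
The quoted figures are self-consistent: 12 seconds on the CPU against about 1.3 seconds on one GPU is roughly a 9x speedup (the “almost 10-times” above), and a 30x overall gain implies the four-GPU render finishes in about 0.4 seconds (an inference from the stated numbers, not a figure given in the talk). A quick check:

```python
cpu_time = 12.0      # seconds for 80 iterations on the 10-core CPU
one_gpu_time = 1.3   # built-in GPU with the Metal Ray-Triangle Intersector

print(round(cpu_time / one_gpu_time, 1))  # 9.2 -> "almost 10-times"
print(cpu_time / 30)                      # 0.4 s implied four-GPU time
```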

We think that’s pretty amazing, yep.

[ Applause ]

And since ray tracing is so great for rendering shadows, I’m just going to turn off a couple of lights here to get the shadows to pop.

And you can really appreciate how much faster the image converges on the GPUs.

We believe the new Metal Ray-Triangle Intersector and external GPU support on macOS are going to enable some great new workflows in apps that take advantage of ray-tracing techniques.

Thank you.

Back to you, Jeremy [applause].

All right, that is really stunning.

Thanks, Rav.

So that’s Metal 2 in iOS 12 and macOS Mojave, an easy-to-use, unified 3D graphics and GPU compute API with broad support across all of Apple’s products, including the A11 Bionic and the Apple-designed GPU.

There are GPU developer tools integrated right into Xcode, and all-new features to support the latest advancements in machine learning training and ray tracing.

There’s never been a better time to move your app to Metal, and we can’t wait to see what you’ll create next.

Thank you.

And now, I would like to hand it over to Mike Rockwell to talk about the latest news in AR.


[ Applause ]

Thanks, Jeremy.

So last year has been an amazing year for AR at Apple.

With the debut of ARKit at last WWDC, iOS became the world’s largest AR platform by a lot.

There are hundreds of millions of AR-enabled iOS devices, and that number is growing rapidly.

As Craig showed you this morning, with iOS 12, we’re taking things further by making AR ubiquitous across the operating system.

We can now experience AR content via the new Quick Look viewer in Messages, News, Safari, and more.

To do that, we had to create a new file format optimized for AR.

And we worked with Pixar and Adobe to create a new mobile AR format called USDZ.

It’s based on the universal scene description format that’s used across the industry for professional content creation.

It’s optimized for mobile devices, and it supports rich 3D assets and animation.

It’s incredibly easy to use USDZ.

On the web, it just takes a couple of lines of HTML, and it’s also natively supported in SceneKit using Model I/O, so you can easily use it in your applications.
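
For reference, those couple of lines look like the snippet below: in Safari, AR Quick Look is triggered by an anchor tag with `rel="ar"` wrapping a preview image (the file paths here are hypothetical).

```html
<!-- Tapping the thumbnail opens the USDZ model in AR Quick Look -->
<a rel="ar" href="/models/chair.usdz">
  <img src="/models/chair-thumb.jpg" alt="Armchair in AR">
</a>
```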

We’ve also been working closely with industry leaders in content creation tools to provide native support for USDZ.

And as you heard this morning, Abhay has a sneak peek for you of what they’re doing at Adobe.

So I’d like to invite him to the stage to give that to you right now.


[ Applause ]

Thanks, Mike.

It’s great to be back onstage.

So as you heard in this morning’s keynote, with Adobe’s Creative Cloud and ARKit, you will be able to reimagine and blend the digital and the physical worlds.

Now this will require a complete reimagining of design interaction models.

So earlier today, we announced a new system for creating AR experiences called Project Aero that infuses ARKit with the power of familiar Creative Cloud applications like Photoshop and Dimension.

So in fact, for the first time, with Creative Cloud and iOS, you will now have what-you-see-is-what-you-get editing in AR.

And as we looked at this space, ARKit is absolutely the leading platform for AR.

And so we’re really excited to partner closely with Apple as we jointly explore and push the boundaries of immersive design.

But to fully realize the potential of AR, you really have to work across the entire ecosystem.

And so today, we are also announcing that Adobe will natively support the USDZ format, along with Apple and Pixar [applause].

Now AR is a unique medium in that it allows interactive content to extend well beyond the screen, where the physical spaces around us literally become a creative canvas.

So let’s take a look.

[ Music ]

[ Applause ]

That’s pretty cool.

So at its core, Project Aero is part of Adobe’s vision and mission to truly democratize creation of immersive content.

As you hopefully saw in that video, creators and developers will be able to collaborate seamlessly to deliver a wide range of AR experiences using these tools.

Stay tuned for more updates on Project Aero at our upcoming conference, Adobe MAX.

Personally, I couldn’t be more excited about our partnership with Apple as we jointly explore the limits of this emerging and powerful new storytelling medium.

Thank you.

Back to you, Mike.

[ Applause ]

Thanks, Abhay.

Isn’t that awesome?

Amazing stuff.

Of course, the foundation of AR at Apple is ARKit.

With robust device position localization, accurate lighting and size estimation, ARKit has made it easy to create AR applications.

The iPhone X has provided groundbreaking face tracking that used to require custom hardware.

After the initial release, we quickly followed up with ARKit 1.5, adding 2D image triggers, a high-resolution background camera, and the ability to suspend and resume tracking so that you don’t have to restart an AR session if you get a phone call.

Well, I’m incredibly excited to tell you about our next big jump forward, ARKit 2.

ARKit 2 delivers a big set of advances, including improved face tracking, with a new ability to track your gaze and tongue.

These highly-requested features allow you to take facial animation to a new level of realism.

Turns out that the first thing kids do when they play with animojis is stick their tongue out.

And I think a lot of you do, too.

That’s why we had to put that in there.

To more accurately integrate objects into a scene, we’ve added environment texturing.

ARKit creates textures based on what the camera sees in the real world; notice that the globe is reflecting the real picture on the table below.

But what about what the camera can’t see?

Well, using machine learning, we trained a neural network on thousands of typical environments.

And this enables ARKit to hallucinate the rest of the scene.

This means that you’ll get plausible reflections of things like overhead lighting (you can see that in the globe) even though the camera has never seen the lighting in the environment at all.

We’ve extended 2D image detection to provide support for tracking those images in three dimensions.

So you can now have 3D objects that stick to images in the real world when they’re moved around, not only in 2D but also in 3D.

[ Applause ]

ARKit can now detect 3D objects.

You can scan objects via an API or a simple developer tool we provide, and then later, those scans can be used to recognize the objects and their locations and trigger a contextually relevant AR experience.

[ Applause ]

An incredibly important feature of ARKit 2 is support for persistent experiences.

You can see here in the video that we’ve mapped an environment and then placed a 3D object.

This map can be saved and then later used to recognize the space and relocalize to that same coordinate system, and not only on that device.

You can share these maps to other devices to allow them to have the exact same experience.

This makes it possible to create apps that provide persistent experiences you can go back to again and again.

You could, for example, have an augmented reality pinboard in your home with pictures and artwork.

And you can share these maps without having to go to the cloud.

This can be done peer-to-peer, locally on your devices.

One other thing we’ve done is give you the ability to share these maps in real time.

And this lets you create multiplayer AR games.

So to experiment with this, we created a new game called SwiftShot.

And I’ll show you the video of it that we made.

[ Music ]

So SwiftShot is a blast to play, and we actually have it here at the show.

If you haven’t had a chance to go by, we have an AR game area.

We wanted to share it with you, so we’ve actually made the full source code available for you to download under an open license.

You can play with it and modify it as you like.

We can’t wait to see the creative things you’ll do with SwiftShot.

So that is ARKit 2, improved face tracking, environment texturing, image detection and tracking, 3D object detection, and persistent experiences as well as multi-user experiences.

That, combined with USDZ support across the operating system, makes iOS 12 by far the most powerful platform for AR.

And we’re really excited to be giving that to you today, and can’t wait to see what you’ll do with it.

So with that, I’ll hand it back to Sebastian.

Thank you.

[ Applause ]

Thank you, Mike.

Wow, I think we’ve seen a ton of exciting new technologies today, and I hope you’re really, really, really excited about this.

We make it easy to leverage machine learning, build great new experiences with ARKit, high-performance graphics with Metal, a huge step forward on the Mac with Dark Mode.

I know you all love this.

And it’s all backed by great advances in our development tools that are really critical to make the most of these super-powerful technologies.

And we’ve also covered how we, together, can focus on what’s most important to our users.

All these great technologies and tools are available today as a developer preview from the WWDC attendee portal.

Who here has started downloading them?

Few people?

Okay, you’ve got to hurry up.

Distribution is limited.

Please make sure to download it right away.

And also, make the most of your week.

There are more than 100 sessions here at the conference that go deep in all of these topics.

Really, really great sessions.

We also recommend that you make good use of all of the labs that we have, because you can get help from the many Apple engineers that are here onsite to answer all of your questions.

So with that, I hope you have a great conference, and I’m looking forward to seeing you around this week.

Thank you.

[ Applause ]
