What’s New in Cocoa Touch 

Session 205 WWDC 2016

The UIKit framework provides the crucial infrastructure needed to construct and manage iOS apps, and UIKit continues to advance the state of the art for app design with many new features. Dive straight into new features across the iOS frameworks that allow your apps to take advantage of many new system services, as well as to offer services to other apps. This is your first stop to discover the many sessions about harnessing the power of iOS 10.

[ Music ]

Good afternoon.

[ Applause ]


My name is Olivier Gutknecht and I manage the UIKit team, and today we’re going to talk about what’s new in UIKit and other system frameworks in iOS 10.

But first, I’d like to talk about, what is not new, in Cocoa Touch.

Why? Because since last WWDC, we introduced a number of very important products.

The iPhone 6S with 3D Touch.

The iPad Pro 12.9 inch and 9.7 inch.

The Apple Pencil, and the Smart Keyboard.

And, with these products there is a number of key technologies you can adopt to make your applications really shine.

The first thing that I want to talk about is adaptivity.

When we introduced adaptivity 2 years ago one thing that we introduced was the concept of a size class.

The iPhone was a compact size class.

The iPad is regular.

And now we have the 12.9-inch iPad Pro. Do we need a gigantic size class?

We don’t. Because now we have all the tools in the framework to express that.

We have the trait system, which is how your application understands its context and environment.

Size classes are one very important trait, but then you have everything you need to build a size-based layout on these devices.

We added new support for size classes in Interface Builder and, in the framework itself, from Auto Layout to asset catalogs, with Dynamic Type, with layout guides, and even UIAppearance, everything is ready and fully integrated.

We’re also going to have two sessions this week on Making Apps Adaptive.

The first one is going to focus on the fundamentals and the tools.

And the second one, will be about the API and the techniques you can use to make your application adaptive.

And this morning there was an amazing session focused on design that you should watch.

The second thing that I’d like to talk about is touch input.

Since last year, with the iPad Air 2 and now the iPad Pro, we have 120 Hz touch scanning.

We are scanning faster than the screen refresh rate.

And now, with Apple Pencil, we have additional properties on touches, so you can get orientation, precise location, and force.

With iPhone 6s, you have the same force information with 3D Touch, and we shipped, in iOS 9 and iOS 9.1, the APIs you need to build amazing UIs with these devices.

And Dominik Wagner is going to present how you can leverage touch input on iOS.

The last thing that I’d like to mention is the Smart Keyboard.

We have an API for you so you can express keyboard shortcuts in your apps. You have more users than ever interacting with your application with a keyboard.

So you can have dynamic in context shortcuts.

The API for that is not new.

It’s actually an API that dates back to iOS 7.

And it’s important now because since iOS 9, we can display these beautiful automatic shortcuts when you hold the Command key.
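As a sketch of what this looks like, here is a keyboard shortcut exposed from a view controller using the UIKeyCommand API available since iOS 9; the class name, selector, and shortcut itself are illustrative:

```swift
import UIKit

class DocumentViewController: UIViewController {
    // Shortcuts returned here appear in the Command-key HUD
    // thanks to discoverabilityTitle (iOS 9 and later).
    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: "n", modifierFlags: .command,
                         action: #selector(newDocument),
                         discoverabilityTitle: "New Document")
        ]
    }

    @objc func newDocument() {
        // Hypothetical handler for the shortcut.
    }
}
```

Because `keyCommands` is a computed property, you can return different commands depending on the current context, which is what makes the shortcuts dynamic.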

Now, let’s talk about what is new in Cocoa Touch.

I’d like to cover four things today.

One, the core technologies you probably already use in your applications.

And we’re going to discuss how, with UIKit and other APIs, you can build better UIs for your applications.

We’re going to show how the new system features in iOS 10 can make your application better.

And we’re going to conclude with how your application itself can extend the system with all-new extension points.

So let’s talk about these core technologies.

Swift 3 is probably the first thing here.

Because with Swift 3, every single API we have is new again.

We changed the rules; we changed how we translate from Objective-C to Swift.

And we are doing that to make your code better, so it really feels native when you write it in Swift.

So for instance, with something like preferredFont, we are now using the first argument label to make the intent clear.

We also try to not repeat ourselves: UIColor’s blackColor and whiteColor are now just .black and .white.
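The two renamings just mentioned can be sketched side by side; the old spellings are shown in comments for contrast:

```swift
import UIKit

// Swift 2 / Objective-C style:
//   let font  = UIFont.preferredFontForTextStyle(UIFontTextStyleBody)
//   let color = UIColor.blackColor()

// Swift 3: the first argument label carries the intent,
// and the repeated type context is dropped.
let font = UIFont.preferredFont(forTextStyle: .body)
let color = UIColor.black
```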

One thing that I really like with Swift 3 is how the Core Graphics APIs now feel great when writing a UIKit application.

And even something like Grand Central Dispatch is now a full object API.

Let’s talk about Grand Central Dispatch.

There is a feature that I really like. Something very common with dispatch is to create your own private queue and enqueue work items to be executed asynchronously.

And a new thing in iOS 10 is that you can actually set the queue to automatically wrap every work item in an autorelease pool.

It’s really easy to do: you just create a dispatch queue and you pass the work-item autorelease frequency.
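A minimal sketch of that queue setup, using the `autoreleaseFrequency` parameter added in iOS 10 / macOS 10.12; the queue label is illustrative:

```swift
import Dispatch

// A private queue that wraps each work item in its own
// autorelease pool (autoreleaseFrequency: .workItem).
let queue = DispatchQueue(label: "com.example.worker",
                          autoreleaseFrequency: .workItem)

queue.async {
    // Temporary Objective-C objects created here are released
    // when this work item finishes, not at some later flush.
}
```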

We have a full session on GCD and concurrent programming.

Foundation also adopted a lot of Swift enhancements. We dropped the NS prefix from many classes, and we have a major new feature: we are adding value types to the Foundation API.

It’s not just that.

We also have full support for units and measurements in Foundation.
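As a small sketch of the new measurement types, converting a distance and formatting it for the user:

```swift
import Foundation

// A strongly typed 5 km distance, converted to miles.
let distance = Measurement(value: 5, unit: UnitLength.kilometers)
let inMiles = distance.converted(to: .miles)

// MeasurementFormatter localizes the unit and the value.
let formatter = MeasurementFormatter()
print(formatter.string(from: inMiles))
```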

Another thing is a new date formatter for the ISO 8601 format.

If you know what ISO 8601 is, you’re probably very happy about that.
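A quick sketch of the new formatter, round-tripping an ISO 8601 timestamp with the default options:

```swift
import Foundation

let formatter = ISO8601DateFormatter()

// Parse an ISO 8601 string into a Date…
if let date = formatter.date(from: "2016-06-14T10:00:00Z") {
    // …and turn it back into the canonical string form.
    print(formatter.string(from: date))  // "2016-06-14T10:00:00Z"
}
```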

[ Applause ]

And another nice feature is how we’re making date computation better with the new NSDateInterval.
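A sketch of what date-interval computations look like with the Swift value type, `DateInterval`; the interval durations are illustrative:

```swift
import Foundation

let now = Date()
let meeting = DateInterval(start: now, duration: 3600)  // one hour
let lunch = DateInterval(start: now.addingTimeInterval(1800),
                         duration: 3600)

// Overlap and containment checks that used to be manual
// date arithmetic are now one call each.
print(meeting.intersects(lunch))  // true
print(meeting.contains(now))      // true
```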

We have two sessions on Foundation: What’s New in Foundation for Swift, and Measurements and Units.

Yesterday, we announced support for Universal Clipboard: copy on your Mac and paste on your iOS device.

So how do you use that in your application?

You just use the existing UIPasteboard API, that’s it.

There is one thing you need to know.

Your device might have to fetch a very large asset; if I copy a very big image, it might take a second or two.

So you might see this UI.

So to avoid problems when retrieving remote data, you should first check whether you need that data or not.

So you can avoid this UI.

We added four new properties on UIPasteboard for that, so you can easily check for strings, URLs, images, and colors.

Another thing in UIPasteboard is that you can now control what you publish on the pasteboard.

So now you can restrict your data to the local device, or set an expiration date.
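A sketch of that cheap check before paying for a possibly remote Universal Clipboard transfer, using the `has…` properties added in iOS 10:

```swift
import UIKit

let pasteboard = UIPasteboard.general

// hasURLs inspects metadata only, so no large asset is fetched.
if pasteboard.hasURLs {
    // Only now do we pay for the (possibly remote) transfer.
    if let url = pasteboard.url {
        print(url)
    }
}
```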

[ Applause ]
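A sketch of publishing restricted data, using the pasteboard options added in iOS 10 (shown with the modern Swift spelling of the option keys; the item content is illustrative):

```swift
import UIKit

// Keep this item on the local device only, and expire it
// two minutes from now.
UIPasteboard.general.setItems(
    [["public.utf8-plain-text": "one-time code 123456"]],
    options: [
        .localOnly: true,
        .expirationDate: Date(timeIntervalSinceNow: 120)
    ])
```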

Now I’d like to talk about color.

It’s not a software feature, it’s really a hardware feature that we now have in the new Retina 5K iMac and the 9.7-inch iPad Pro.

It’s a technology shift: you can now represent colors on screen that were not physically possible before.

And, we are changing how we are going to represent colors.

iOS devices were using the sRGB color space.

Now, with these devices, you’re going to work in the extended sRGB color space.

So what does that mean for you?

So first, iOS is actually already color managed.

iOS and [inaudible], and in iOS 10, we are exposing the APIs to work with a wide color range.

The first way is to just use UIImageView.

UIImageView is actually already color managed since iOS 9.3, so you’re ready there.

What about UIColor?

We made that very simple.

We have one existing initializer, init with red, green, blue.

And a second, new one: init with displayP3 red, green, blue.

The first one is how you can represent an arbitrary color in the extended sRGB space.

So now you can express a color that goes beyond the 0 to 1 range, if you want to go beyond classic sRGB.

The second one is Display P3. Display P3 is actually a color space that is quite common on the content creation side.

So if you receive a spec with a wide color, it’s very likely going to use Display P3, which is why we made that initializer available.

And that’s it.

We don’t have a color space class.

You can just use these two initializers and you’re done.
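The two initializers just described can be sketched like this; the component values are illustrative:

```swift
import UIKit

// Extended sRGB: components may now go outside the 0...1 range
// to reach colors a wide-gamut display can show.
let vividRed = UIColor(red: 1.2, green: -0.1, blue: 0.0, alpha: 1.0)

// Display P3: handy when a design spec is delivered in P3.
let p3Red = UIColor(displayP3Red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)
```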

Now, I’d like to talk about image rendering.

Image rendering is something that you’re probably already using today with the UIGraphicsBeginImageContext and UIGraphicsGetImageFromCurrentImageContext APIs.

The kind of thing that you do when you want to render an image offscreen.

So there is one problem with this API.

It’s actually 32-bit sRGB only; that’s a built-in assumption.

Also, that’s not our best API: it predates blocks, and it’s not really extensible.

So something that you write with the UIGraphicsBeginImageContext API would look like that.

And I don’t know how many of you have already made the mistake of trying to get the image after ending the image context.

I’m definitely one.

But now we have a new class, the UIGraphicsRenderer class.

And what this is giving you is, first, it’s fully color managed.

It’s going to do the right thing by default.

If you’re on a 9.7 inch iPad, you’re going to get a wide color context.

If you’re not, you will get the classic context.

The other thing: it’s block based, easier to use.

And it’s an object-based API, so we have two concrete subclasses, for images and for PDFs.

Also, and that’s quite important, that class will manage the lifetime of your context.

Which means that we can do some memory optimization under the hood.

Let me give you an example.

For the equivalent of the one before, you just create your renderer.

To generate an image, it’s just a matter of passing a block with your drawing code.

And the thing that’s really cool about that is, if you call the image method again, it’s going to just reuse the context.
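A sketch of that flow with the image subclass, `UIGraphicsImageRenderer`; the size and the fill are illustrative:

```swift
import UIKit

// The renderer picks the right format (wide color on a wide-gamut
// device, classic sRGB elsewhere) and manages the context lifetime.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))

let image = renderer.image { context in
    UIColor.blue.setFill()
    context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}

// Calling renderer.image { … } again can reuse the same context.
```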

A new feature related to images is asset catalogs.

With asset catalogs, we have support for wide color assets, right to left and left to right assets, and compression.

And of course they’re fully integrated with the trait system in UIKit.

So, let’s see the first one.

Wide color assets.

So what do we want to do here?

If you’re embedding a wide color asset in your application, you want to be able to deploy on a non-wide-color device.

So what we are going to do is generate an sRGB variant automatically for you.

And that’s going to be fully compatible with App Thinning, so we’re going to deploy only what is needed on the right device.

The other new feature in asset catalog is compression.

We added support for a lossy compression scheme.

And if you choose so, it’s going to automatically generate the right variants for a given device.

So you will get a great balance between footprint and quality.

And again, that’s completely compatible with App Thinning so we only deploy what is needed on a given device.

The last feature I want to describe here for asset catalogs is directional image asset support.

It means that now, directly at the asset catalog level, you can specify that an image is a right-to-left or a left-to-right asset.

Or something that should be automatically flipped, directly in the asset management system.

We also have a full session on international user interfaces, where you can learn more about the improvements in iOS 10.

Now, I’d like to switch to how you can build better user interfaces and interactions in iOS 10.

And the first thing is, you need to make sure that everybody can interact with your application.

We have the new Accessibility Inspector, and one thing you can do with it, which is really great, is connect to an app and immediately know the accessibility properties of a given UI element.

There is another really nice feature.

You can also run an audit on the application running in the simulator or your device.

So you can have a first hint about the kind of problems you might want to fix, and probably really should fix, in your application.

One thing though: it’s just like Xcode, where having zero warnings doesn’t mean that your application is absolutely perfect.

But, that’s a great first step.

We also have a new way to interact with your applications in iOS 10.

We added a new API, speech recognition. [ Foreign Language ]

And not just for English.

And it’s a very simple API.

SFSpeechRecognizer is something that can do continuous speech recognition.

And, you can connect it to an audio file or a live audio stream.

So you can do real-time speech recognition.

You can configure it to have a recognition that is optimized for dictation, or for very simple strings.

Just want to give you an example of this API.

You create a recognizer, you configure it with a request, you start a task, and we’re going to call you back with results.

Really easy to use.
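A sketch of that recognizer–request–task flow against a bundled audio file; the locale, the file name, and the prior authorization request are assumptions for the example:

```swift
import Speech

// Assumes the app has already requested speech-recognition
// authorization, and bundles a file named "memo.m4a".
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
let url = Bundle.main.url(forResource: "memo", withExtension: "m4a")!
let request = SFSpeechURLRecognitionRequest(url: url)

recognizer?.recognitionTask(with: request) { result, error in
    if let result = result {
        // Partial and final transcriptions arrive here.
        print(result.bestTranscription.formattedString)
    }
}
```

For live audio, you would use `SFSpeechAudioBufferRecognitionRequest` and append buffers from the microphone instead.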

We have a video online in the WWDC app about this API.

Another great improvement in iOS 10 is text input.

One thing you can do now is to add information about the kind of content you have in your text field.

So with that, we can provide intelligent suggestions in QuickType.

So for instance, if my text field is about a location, I’m going to see an address in QuickType.

We have many predefined content types, for people, you can specify first name, last name, full name, or locations, and in that case that was configured as a full street address.

Or even things like email, telephone, credit card numbers.
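A sketch of tagging two fields with their content types, using the `textContentType` property and `UITextContentType` values added in iOS 10:

```swift
import UIKit

// Tells QuickType what this field expects, so it can suggest
// a full street address from context.
let addressField = UITextField()
addressField.textContentType = .fullStreetAddress

let emailField = UITextField()
emailField.textContentType = .emailAddress
```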

Another thing about text input and text is how you can render text in iOS.

In iOS 7 we added support for Dynamic Type, and in iOS 10, we are making the content size category for Dynamic Type a trait.

So what does that mean?

Before iOS 10, the content size category was just a property on UIApplication.

And we actually sent notifications.

If the user changed the content size, you had to react to this notification and update your UI.

Now, because it’s integrated as a trait, it’s available on view controllers, which means that you can use Dynamic Type in all your UI extensions.

And we also added support for Dynamic Type in existing UIKit elements.

So if you build your UI with text fields, text views, and labels, and you want to react to the size change, now in iOS 10 you can use the new adjustsFontForContentSizeCategory property, and your labels, text views, and text fields are going to just react to that.

And your application, if you’re using Auto Layout for instance, will update automatically.

So I think that’s a really great feature for Dynamic Type: fully automatic support for labels, text views, and text fields.

[ Applause ]
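The automatic Dynamic Type behavior just described can be sketched in two lines of configuration:

```swift
import UIKit

let label = UILabel()

// The font must come from a text style for the automatic
// adjustment to apply.
label.font = UIFont.preferredFont(forTextStyle: .body)

// New in iOS 10: the label tracks Dynamic Type changes by
// itself; no notification handling required.
label.adjustsFontForContentSizeCategory = true
```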

We also wanted to add new customization in UIKit.

A good example is tab bar items: now you can customize the badge colors and the text attributes in tab bar items, and you can also customize the unselected item tint color.

So let’s say that I have a standard tab bar that is going to look like that.

If I add this code, to my application, that’s going to be my new tab bar UI.

Which is probably also why I’m not a designer.

[ Applause ]
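A sketch of those tab bar customizations, using the iOS 10 additions (shown with the modern Swift spelling of the attribute keys; the colors and titles are illustrative):

```swift
import UIKit

let tabBar = UITabBar()
// New in iOS 10: tint for items that are not selected.
tabBar.unselectedItemTintColor = .darkGray

let item = UITabBarItem(title: "Inbox", image: nil, tag: 0)
item.badgeValue = "3"
// New in iOS 10: badge color and badge text attributes.
item.badgeColor = .purple
item.setBadgeTextAttributes(
    [.font: UIFont.boldSystemFont(ofSize: 12)],
    for: .normal)
```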

Another great customization is in web views.

When we shipped Peek and Pop for 3D Touch iPhones, we had a very simple API in UIKit for view controllers.

And we had built-in support in WKWebView.

And now in iOS 10, you have fine control of Peek and Pop behaviors in the web view, which means that you can use custom view controllers when doing a Peek.

And you can have preview actions, and a great consequence of that is, when you Pop in a web view, you can decide to stay inside your application.

It’s available as new delegate methods on the web view class.

But the thing that I’m really excited about for Peek and Pop is the new UIPreviewInteraction class.

So, Peek and Pop in UIKit, is really two things.

There is a look, this blur effect, this nice animation when you want to preview something.

And there is a feel: how do we react to pressure, how do we detect the intent of the user, is that a tap, a scroll, the beginning of a Peek?

With UIPreviewInteraction, what you can do is use the UIKit-provided implementation for the feel, but have your own animation for Peek and Pop.

And there’s a new delegate on that class, so you can know when you move from Peek to Pop, and we are going to give you the progress so you can plug in your own interaction, your own animation.

You can do a completely custom Peek and Pop UI. We think that’s a really big deal.

[ Applause ]

If you want to learn everything about Peek and Pop and 3D Touch, we have a full session here on Thursday at 4 p.m.

Another new thing is something that we added in scroll views that you asked for.

And it’s actually full support for UIRefreshControl.

[ Applause ]

It’s available on UIScrollView and scroll view subclasses.

We already had support for refresh controls in table views, so it means that now you can use refresh controls in collection views.

Let’s talk about collection view.

That’s a great feature in UIKit.

And last year we added support for automatic self-sizing cells in flow layout.

What you had to do before is, you had to compute that estimated size.

And sometimes it’s not easy, but at the UIKit level we know what cells are going on screen.

So we have a new mode in flow layout: you can pass automatic size and we’re going to do that estimation for you; you don’t need to do that estimation yourself anymore.

[ Applause ]
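Opting into that automatic estimation is a one-line sketch on the flow layout:

```swift
import UIKit

let layout = UICollectionViewFlowLayout()

// iOS 10: let UIKit estimate cell sizes for self-sizing cells,
// instead of supplying a hand-computed estimate.
// (In the Swift 3 SDK this constant was spelled
// UICollectionViewFlowLayoutAutomaticSize.)
layout.estimatedItemSize = UICollectionViewFlowLayout.automaticSize
```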

Did you know that we have reordering support in UICollectionView?

We do. And we are improving that by adding paging support in collection views.

That’s something that is exposed at the scroll view level, and collection view will now do the right thing for you.

But one feature this year that I’m really excited about for collection view is smooth scrolling.

It’s a really simple and smart idea.

So, say I have a grid in a collection view, and I’m going to scroll that grid.

At some point, I’m going to need to bring three cells on screen at the same time.

And if your cells are a little bit complex to lay out, it might take some time and you might drop a frame.

That’s not great.

So what we are doing now, is something that we call cell prefetching.

The idea here is, before we actually need a cell on screen, we’re going to ask you for that cell, and we are not going to ask for cells as a block, like an entire row.

We’re going to ask for one cell, then another and another.

So we’re going to amortize the cost of the cell setup in your application.

A great thing about that: it’s automatic.

If you rebuild your application for iOS 10, you get that behavior completely for free.

[ Applause ]

But there is more.

We have a new protocol, we call that data prefetching.

Because sometimes, it’s not just about doing the layout for your cell.

Sometimes you need to fetch data.

You need to read from disk.

You need to fetch data from the network, so it would be great if we could tell you: hey, we are going to need that index path soon.

So you can prepare, in advance.

That’s exactly what this new protocol does.

You implement a few delegate methods, and we’re going to tell you when we are about to ask for a cell, so you can have the data ready.

So that’s going to improve, again, the performance of your collection view based application.

And we also made that available on table views.

[ Applause ]
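A sketch of the data prefetching protocol for collection views, `UICollectionViewDataSourcePrefetching`; the class name and the fetch/cancel bodies are illustrative:

```swift
import UIKit

class FeedDataSource: NSObject, UICollectionViewDataSourcePrefetching {
    // Called well before the cells are needed, a few index
    // paths at a time, so the cost is amortized.
    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        // Kick off disk or network fetches for these rows.
    }

    // Called when the data is no longer needed, for instance
    // when the user reverses scroll direction.
    func collectionView(_ collectionView: UICollectionView,
                        cancelPrefetchingForItemsAt indexPaths: [IndexPath]) {
        // Cancel the in-flight work.
    }
}
```

You assign an instance of this to the collection view’s `prefetchDataSource` property; table views have the matching `UITableViewDataSourcePrefetching`.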

We have an entire session about what’s new in UICollectionView: cell prefetching, data prefetching, reordering. Don’t miss this one.

But I think the major new feature in UIKit this year is something that we call UIViewPropertyAnimator.

It’s a new animation API that’s going to let you build interruptible animations.

Animations you can scrub, animations you can reverse.

We are exposing a new set of timing curves.

And it’s fully dynamic.

You can add animation on the fly.

And the way I describe that new API is: just like a timeline in a movie.

Something that you can scrub, pause, play, reverse.

The model is really simple, it’s really easy to use.

You just have to create a new animator.

You set the timing parameters and the duration, and you add your animations.

And you start the animation; and because you have an object that represents that animation, that’s how you can pause it or change its progress.
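That model can be sketched in a few lines; the view movement is illustrative:

```swift
import UIKit

let view = UIView()

// Create the animator with timing parameters and an animation block.
let animator = UIViewPropertyAnimator(duration: 2.0, curve: .easeInOut) {
    view.frame.origin.x += 200
}

animator.startAnimation()

// Later, interactively: pause, scrub, reverse, resume.
animator.pauseAnimation()
animator.fractionComplete = 0.5
animator.isReversed = true
animator.startAnimation()
```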

It’s also fully integrated with the view controller transition system, and we’re going to explain everything about that in Advances in UIKit Animations and Transitions this Wednesday.

But I’d like to show you, what you can build with that.

So, this is a very simple photo app. That’s something that you could do before: have custom transitions in a navigation controller.

But with this animation API, I can start the transition, catch it mid-flight with a gesture, move it, release it, and catch it again.

I’m going to reverse my transition, and let the animation finish.

And what’s amazing about that is that it’s just one transition.

So we are moving the same transition from non-interactive, to interactive and gesture driven, to non-interactive again, and back.

You can fully blend gesture-based interactions and animations.

We think that’s a great feature, we cannot wait to see what you’re going to build with that.

[ Applause ]

Now, I’d like to talk about some new features we have in iOS 10.

And the key thing here is, there’s nothing that you absolutely need to adopt.

No, it’s quite the opposite: what can we do to help you?

We have new features that we think are going to make your applications better.

Features that are going to simplify your architecture.

Things that are going to make your application more integrated.

A good example of that is something very simple.

It’s how we changed openURL.

So first, we made it asynchronous with a completion handler, but we also added support for universal links.

So you know, a universal link is when you open a URL, to a social network for instance, and if you have the native app it’s going to open the native app.

But now, with this new API, you can actually check if there is an application installed on the system.

So if there is not, you can stay in your application; you can keep the experience instead of opening [inaudible] for instance.

So that’s going to really improve how you can integrate content and deal with other applications on the system.
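A sketch of that check, using the `open(_:options:completionHandler:)` method and the universal-links-only option added in iOS 10 (shown with the modern Swift spelling of the option key; the URL is hypothetical):

```swift
import UIKit

let url = URL(string: "https://example.com/post/1")!

// With .universalLinksOnly, the open only succeeds if another
// app is installed to handle this URL as a universal link.
UIApplication.shared.open(url,
                          options: [.universalLinksOnly: true]) { success in
    if !success {
        // No native app installed: keep the user in your app,
        // for instance by presenting the page in-app.
    }
}
```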

Another great feature, is the changes we made to Core Data.

The first one is something that we call, query generation.

The idea here is, it’s quite common to have a context for your main UI while you’re doing work on background queues, in separate contexts.

And what you can do with query generation is actually pin your main UI to a generation of your data: you can control exactly when you want to move from one version of your model to another.

So it means that you don’t have to deal with faults firing because something changed on a background queue, because you can decide exactly when the main UI should move to a new version of your data.

That’s a really great way to simplify your Core Data code in your applications.
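A sketch of pinning a main-queue context, using the `setQueryGenerationFrom(_:)` API added in iOS 10; the function wrapper is illustrative:

```swift
import CoreData

// Pin the view context to the current generation of the store,
// so the UI keeps seeing a consistent snapshot.
func pinToCurrentGeneration(_ viewContext: NSManagedObjectContext) throws {
    try viewContext.setQueryGenerationFrom(NSQueryGenerationToken.current)
}

// Later, when the UI is ready to pick up background changes,
// call the same API again to advance to the newest generation.
```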

The other feature is, about connection pooling.

Now the store has full support for a multiple readers, one writer mode.

That’s going to give you a great performance enhancement.

We also have, new features on the tool side, but for that, I suggest you go to the What’s New in Core Data session on Friday.

Another way to model data in your application is to use CloudKit.

So with CloudKit, we already had support for public databases, and private per-user databases.

And what we are adding in iOS 10 is record sharing.

So you can have multiple users, accessing the same record.

So that’s going to open a completely new class of application with CloudKit.

So that’s great.

But something that is sometimes difficult to do with a collaboration-based app is the UI flow.

How do you invite people to collaborate on a document?

How do you do that in a way that is going to be secure?

So we added the new UICloudSharingController.

It’s a view controller exposed by UIKit, and it’s really easy to use: you just pass the new share object that controls record sharing in CloudKit.

And we’re going to automatically generate for you the UI to manage the invitation flow and the invitee list.

It’s really easy to build a collaboration-based app with CloudKit.

We have an entire session on CloudKit, What’s New with CloudKit, that’s going to be in Presidio on Thursday.

Now, I’d like to talk about NSUserActivity.

NSUserActivity is a class that we use to capture the state of your application.

And, we are using that already, for things like Handoff and Spotlight.

What we are doing now, in iOS 10, is adding support for locations in user activities.

So if I’m in an application that deals with addresses, I can express that the current activity for the user is about this address.

So you just pass a MapKit item, you attach it to the activity, and now you understand how I built that suggestion in the text field a few minutes ago.
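A sketch of attaching a location to the current activity, using the `mapItem` property added to NSUserActivity in iOS 10; the activity type and coordinates are hypothetical:

```swift
import MapKit

// Advertise the address the user is currently viewing, so the
// system can suggest it elsewhere (e.g. in an address text field).
let activity = NSUserActivity(activityType: "com.example.viewing-place")
activity.mapItem = MKMapItem(placemark: MKPlacemark(
    coordinate: CLLocationCoordinate2D(latitude: 37.33, longitude: -122.03)))
activity.becomeCurrent()
```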

The system will notice that, oh, maybe this app is dealing with these addresses.

So that’s probably a very good suggestion for a text field about an address.

That’s going to happen completely automatically for you if you adopt user activities.

We have many new ways to increase usage of your application with what we call proactive suggestions, these intelligence-based, activity-based features in iOS 10.

Another place where we use NSUserActivity is in search.

So in iOS 9, we added support for indexed activities.

It was really easy to use: you just create an activity, set its title and information, and it would show up in Spotlight.

But now, we have a new button here: Search in App.

Your user can now go directly back to your app, and you can continue the search in your own application.

It’s really easy to adopt.

Step one: you add a new key to your Info.plist to tell us that you support Spotlight search continuation.

The second step is to implement a new UIApplication delegate method.

It’s really easy to implement.

We give you the user activity, so you can build the UI and continue the search in your application.

The last new thing with search: last year, we added support for indexing your content with Core Spotlight.

But now you can actually search for your data in Spotlight.

We are opening the Spotlight index.

And that’s great because we have a fully optimized implementation of that index for power and performance, and we have a very powerful query model.

So it’s really easy to use.

You just build a search query, pass a request, and you can search the Spotlight index for your data.

So you can integrate system-indexed data and data from [inaudible] coming from a network in your application.
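A sketch of querying the app’s own indexed items with the CSSearchQuery class added in iOS 10; the query string and attribute names are illustrative, in the CSSearchQuery syntax:

```swift
import CoreSpotlight

// Find indexed items whose title contains "coffee"
// (the trailing "c" makes the match case-insensitive).
let query = CSSearchQuery(queryString: "title == \"*coffee*\"c",
                          attributes: ["title", "contentDescription"])

query.foundItemsHandler = { items in
    // Items are delivered in batches as they are found.
    for item in items {
        print(item.uniqueIdentifier)
    }
}
query.completionHandler = { error in
    // All batches delivered, or an error occurred.
}
query.start()
```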

We have a new session, this year about search APIs that’s going to cover continuation, Core Spotlight search, and other features.

We shipped ReplayKit, which was a way to record the screen in your application.

In iOS 10, you can use ReplayKit for live streaming.

So you can have live broadcasting of your application, and because we are also providing an extension point, it means that third-party services can give you that live broadcast ability.

And we are using that for instance, in playgrounds.

Another great framework is SceneKit.

SceneKit is a really powerful 3D engine you can use to integrate interactive 3D components in your application.

The major new feature this year is a completely new rendering engine: physically based rendering.

I could say that it now supports high dynamic range, or that it’s internally using a linear color space.

But I think that’s better to show you.

This is your fire truck on iOS 9.

This is your fire truck on iOS 10.

That’s really amazing.

[ Applause ]

And it’s now available on watchOS 3.

We have a full session on SceneKit rendering in Presidio.

Then I’d like to talk about Apple Pay.

Two things about Apple Pay.

First, we are extending how Apple Pay is going to be available to your users.

Today, Apple Pay is on iOS and on the Watch, and as you learned yesterday, we are expanding that with Apple Pay on the web.

We have support for Apple Pay on the Mac.

So for users, Apple Pay is everywhere.

But what’s great is, as a developer, you already have support for Apple Pay in your UIKit-based application.

But now, you can use Apple Pay in Safari with the new JavaScript framework.

It also means that you can use Apple Pay in your [inaudible] Safari view controllers.

The other place where you can use Apple Pay now, is in non-UI code.

We have a new API, for non-UI extensions.

And using Apple Pay in an extension is really amazing; that’s a great feature, for instance, for your iMessage apps.

This is the kind of flow you can now build directly in Messages with an extension.

I would like to point you to the two sessions on Apple Pay.

The first one, which is unfortunately at the same time as this session, is Apple Pay on the Web.

And just after that, there is What’s New in Wallet and Apple Pay.

I would like to finish with how you can make iOS better.

How you can integrate with iOS.

We have major new extension points in iOS 10: Messages, Siri, CallKit.

We also improved existing extensions. For instance, for keyboards, you can now use the same globe-key keyboard picker from your keyboard extension.

We can also give you hints about the language the user is interacting with.

So you can build multi-language support in your third-party extension.

Another thing that we improved in iOS 10 is widgets, with new display modes.

So before, you just gave us your preferred size.

Now we have user-controlled sizes: the compact mode, which is a fixed height, and the expanded mode, which is a variable height.

It’s really easy to adopt.

You don’t have to do anything, but we have new APIs so you can know when the widget is going to move from compact to expanded.

You can give us a maximum size, you can know the display mode.

You can customize the information you want to display.

There is one thing I’d like to point out about widgets though: they are going to appear on the lock screen.

So, be careful about the kind of data you’re presenting here.

One thing to keep in mind is, be consistent with the kind of data you’re presenting in your widget.

If I add your widget to my home screen or see it on the lock screen, I expect that I’m always going to see the same kind of data, at the same level of privacy, and that I’m not going to be surprised by things I don’t want to see on my lock screen when my iPhone is locked.

A completely new framework in iOS 10 is user notifications.

So user notifications are something that we used to have at the UIApplication level.

We have a new framework, but we are exposing the same features.

And we are adding a few enhancements there.

First, we are unifying local and remote notifications.

So much easier to deal with.

[ Applause ]

You have better control over how we are going to deliver notifications to the user.

Another great thing is, we’re going to tell you before displaying the notification.

So in-app, you have better control over the notification itself.

And, that’s also a unified model across all different platforms.
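A sketch of the unified model with the new UserNotifications framework; the notification content and the one-minute trigger are illustrative:

```swift
import UserNotifications

// One center handles authorization, local scheduling, and the
// delegate callbacks for both local and remote notifications.
let center = UNUserNotificationCenter.current()
center.requestAuthorization(options: [.alert, .sound]) { granted, error in
    guard granted else { return }

    let content = UNMutableNotificationContent()
    content.title = "Reminder"
    content.body = "Time to stand up."

    // The same request type is used whether the trigger is local
    // (time, calendar, location) or the payload arrives via push.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60,
                                                    repeats: false)
    let request = UNNotificationRequest(identifier: "stand-up",
                                        content: content,
                                        trigger: trigger)
    center.add(request)
}
```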

In addition to these API enhancements, we also have two extensions.

The first one is the service extension.

The service extension is something that you can insert on the device, when we receive the notification and before it’s actually displayed.

So you can process the payload before it appears on the screen.

And there are two great use cases, things you can build with that.

You can have media attachments: the payload is pretty small, but if you know what you want to display for a given payload, you can fetch it when receiving the notification and display it.

And the other great feature is end-to-end encryption, because now you’re in charge on both sides: on the server and on the local device.

So you can receive something, and decrypt it in your own code.

So you can receive something, and decrypt that in your own code.

We think that’s a great new feature and great new possibility for extensions.

[ Applause ]

This is the other extension we have for the user notification framework.

You can have embedded UI views in your notification.

There is one restriction, though: you cannot directly interact with the view, but you do have the notification actions.
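A content extension is a view controller conforming to `UNNotificationContentExtension`; here is a minimal sketch, with a plain label standing in for your custom UI:

```swift
import UIKit
import UserNotifications
import UserNotificationsUI

class NotificationViewController: UIViewController, UNNotificationContentExtension {

    let bodyLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        bodyLabel.frame = view.bounds
        bodyLabel.numberOfLines = 0
        view.addSubview(bodyLabel)
    }

    // Called when the notification arrives; render it with your own UI.
    // Direct touch interaction isn't allowed, but notification actions still work.
    func didReceive(_ notification: UNNotification) {
        bodyLabel.text = notification.request.content.body
    }
}
```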

We have two sessions on user notifications, Introduction to Notifications and Advanced Notifications, in Pacific Heights on Wednesday.

Another new framework in iOS 10 is CallKit, and again, we are offering new extension points here.

With the directory extension, you can configure how incoming calls should be blocked.

You have full control over that.

The second feature is how we identify a phone number and display a label.

So when we receive a call, we first check the address book, and if we cannot find a match, we ask your extension.

It’s real easy to use API.

It’s non-UI extension point.

It’s really two methods.

But the real great new feature in CallKit is the call provider API.

With the call provider API, we have a real first-party experience for your voice-over-IP application.

Which means that now you can have something like the full screen incoming call UI when you receive a call in your app.

But it’s not just that, we are fully integrating your application with a core management on iOS.

And things like, favorites and recents, we are automatically get, the calls from your application.

You can support Siri, you can support CarPlay, and you will be integrated with Do Not Disturb and Bluetooth.

That’s a major new feature for a mobile hone.

[ Applause ]

You can learn all about that in the Enhancing VoIP Apps with CallKit session.

I’d like to talk about Siri.

So, you might not know this, but Siri actually behaves differently depending on the context.

So if you’re using Siri, from the home screen with the home button, or if you’re using Siri with hey Siri, or if you’re using Siri with CarPlay, or as an accessibility feature.

Siri is going to tune itself to that context so you can have the best possible interaction with Siri.

It’s a quite complex task.

Because it’s involved the recognition phase, understanding the domain, understanding the environment, and that’s something that we are going to do for you for free when you build a SiriKit extension.

What we’re going to give you, is an intent representing the request, and that’s how your extension will interact with that.

And, you can also help Siri by giving your own vocabulary, which might be specific to your app, or specific to the user, so we can again, tune the recognition of the interaction with Siri.

So how does that work?

The idea behind the Intents extension is, it's a non-UI extension that represents the interaction between Siri and your application.

So it’s based on the concept of intents, coming from Siri and responses.

And the main idea here is, intents are a domain specific thing.

So you have the messaging domain, the payment domain, etcetera.

And for messaging, you have a list of intents that Siri can recognize and interact and send to your application for [inaudible] Siri interaction.

So the main goal here, for the Siri Intents extension, is to make sure that you and Siri are on the same page once Siri has understood a request.

Validate the request, get a confirmation on it if needed, then execute the request: that's basically the flow.

And when you use Siri to say something like, "tell Miko on WWDC chat that we need to meet after this session," what Siri is going to encode in the intent is the fact that it's in the messaging domain.

It's the send message intent, there is a recipient in there, it's about your app, and we have the content for you.

We have a second extension for SiriKit: the Intents UI extension.

It’s actually an optional extension.

But with that you can embed your own UI into the Siri transcript, so you can have a confirmation that is tuned to your content.

That’s really great.

And another great thing about the intent system is, it's not just about Siri.

It's a way to describe a request; of course we use that in Siri to communicate with your app.

But, that’s also how we integrate with CallKit, that’s how we do the right sharing extension in maps.

And the other feature that is really great is, that's how you can donate information to the system about what your user is doing.

For instance, in your messaging app.

And it’s combined with and it’s user activity of course, and with that, let’s say that I am sending a message using my WWDC chat application, then in the contacts UI, we’re going to show the information about this contacts automatically.

And when we have a list of actions, we can show your application directly there, if you expose that information through an intent.

And at some point, your application can even become the default application in the standard contact card.

That’s really amazing in term of adding visibility to your application.

I’d like to conclude with the iMessage apps.

So, Messages is really becoming a platform, and we have two features here: one is sticker packs and the other one is the Messages extension.

For sticker packs, no code is required: you just have to package your images, static or animated.

You use Xcode for that, and you can distribute them on the Messages store.

The other thing you can do with iMessage apps is use the new Messages extension.

With that you can write a fully UIKit based extension and do things like dynamically generating sticker content.

You can even use the camera in that extension, which is completely new.

And you can tune your UI to its compact or expanded mode, and that's going to live in Messages. That's a great feature, but you can do even more.

You can customize the bubble itself, which means that your extension, using the Messages framework, can actually have a concept of a conversation in Messages, integrate with it, and react to what the other user changed.

We have full support for Messages extensions in [inaudible] and iOS 10.

That’s real easy to use and that’s great feature for you to integrate in messages.

The other thing you get with that is, of course, completely custom content in your interactive bubble.
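A sketch of inserting an interactive message bubble from a Messages extension (the caption and URL are placeholder values; the URL is where you would carry your custom conversation state):

```swift
import Messages

class MessagesViewController: MSMessagesAppViewController {

    func insertCustomBubble() {
        guard let conversation = activeConversation else { return }

        // Reuse the session of the selected message so replies update
        // the same bubble instead of adding a new one.
        let session = conversation.selectedMessage?.session ?? MSSession()
        let message = MSMessage(session: session)

        let layout = MSMessageTemplateLayout()
        layout.caption = "Let's meet after the session"
        message.layout = layout

        // Encode your app's state; the other side decodes it on tap.
        message.url = URL(string: "https://example.com/meetup?step=1")

        conversation.insert(message) { error in
            if let error = error {
                print("Insert failed: \(error)")
            }
        }
    }
}
```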

We have two sessions for iMessage applications: Messages Apps and Stickers, which was this morning, and the second one, focusing on the extension part, is on Thursday at 1:40 p.m. Thank you very much, enjoy WWDC.

[ Applause ]
