Security and Your Apps 

Session 706 WWDC 2015

Your reputation depends on protecting your users’ data. Do this by taking advantage of the Security frameworks built into iOS, OS X and watchOS. Find out about new developments in Security and learn the best practices to develop secure apps and protect your users’ data.


IVAN KRSTIC: My name is Ivan, and I head the Security and Privacy Strategy group at Apple, and today we are going to talk about security.

And, in fact, most of today’s talk will focus on security on devices.

Now, as you know, we have a very strong lineup of device security features, some of them unique to OS X, others to iOS, and some present on both our platforms.

We are continually working to make these features better, and you’re going to see us continue to relentlessly innovate in this space.

But I’m actually here today to have a brief interlude with you about network security.

When we think about network security, most of us think of HTTPS, which we experience as the lock icon in the browser.

It used to be that websites used HTTPS and TLS if they were transmitting sensitive information.

But we no longer think of just things like credit card information as sensitive.

In fact, today we think of all kinds of user information as being sensitive, and even some things that you as a developer may not think of as sensitive, a user may.

One other really important thing about TLS that we don’t often think about is that it doesn’t just protect the secrecy of information as it moves across the network.

It also protects the integrity of those connections.

And the threats on the network have changed.

It’s no longer just someone wearing a black ski mask hiding on some corner of the Internet trying to get your credit card numbers.

In fact, users may want to protect themselves against other kinds of threats.

For example, Internet service providers that are injecting tracking headers into every request, or even outright recording browsing histories to support ad targeting.

So in 2015 we believe that TLS is really a minimum baseline for responsibly protecting customer information on the network.

But TLS is not quite enough.

Many servers still use version 1 of TLS, which is very old, it’s 16 years old.

And the newest version of TLS, 1.2, which is 7 years old, contains a number of really important cryptographic improvements to the protocol, which make it more resilient to the kinds of threats that we are seeing today and will be facing in the future.

And in fact, even TLS 1.2 is not quite enough.

With the way TLS works, if an attacker is able to compromise a server and steal your TLS key, they can use that key to retroactively decrypt all previously encrypted data by that server.

That’s obviously very undesirable.

There’s a property called forward secrecy that prevents this.

With forward secrecy, if an attacker is able to compromise a server and steal the TLS key, they can only use that key to decrypt future traffic that the server encrypts, but no prior traffic.

And this is great because it mitigates bulk recording of encrypted network data.

TLS supports forward secrecy and it does it through the use of what are called cipher suites which are combinations of cryptographic primitives that you actually have to enable on your server.

So your server needs to run not just TLS, but in fact TLS 1.2, and in order to get forward secrecy, you must explicitly configure it to do so.
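To make that concrete, here is a hypothetical sketch of what such a configuration can look like on the server side; this example uses nginx directives, and the exact directive names and cipher list will differ for other server software:

```nginx
# Hypothetical nginx TLS configuration (not from the session):
# allow only TLS 1.2, and prefer ECDHE cipher suites, which
# provide forward secrecy through ephemeral key exchange.
ssl_protocols             TLSv1.2;
ssl_ciphers               ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
```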

So why am I telling you all of this?

Well, here is a quote from Tim, and I will let you read it.

“We must get this right.”

And to help you get it right, we’re introducing a feature called App Transport Security.

Here’s how it works.

If you link your app against OS X El Capitan or iOS 9, by default it won’t be able to make any unprotected HTTP connections; they will simply fail, and any TLS connections that your app makes will have to adhere to these best practices that I just described.

And that’s TLS 1.2 with forward secrecy.

Cryptographic primitives that are known to be insecure are not allowed, and there are minimum key size requirements put in place.

Now, you may not be able to comply with all of these restrictions immediately, and that’s okay, you can specify exceptions in your Info.plist either on a case-by-case basis for each domain or as a global override.
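As a sketch, a per-domain exception might look like this in your Info.plist; the key names are the ones App Transport Security uses, but `legacy.example.com` is a placeholder for your own domain:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- "legacy.example.com" is a hypothetical domain -->
        <key>legacy.example.com</key>
        <dict>
            <!-- allow plain HTTP to this one host while it migrates -->
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```

The global override is the `NSAllowsArbitraryLoads` key, which should be treated as a last resort.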

But, as an industry we have to very soon get to a point where no user information hits the network unencrypted, and we need your help.

Thank you for that and I will turn it over to my colleague Pierre who’s going to tell you about System Integrity Protection.



Hi everyone, my name is Pierre-Olivier Martel.

I’m the engineering manager for Sandboxing Technologies here at Apple.

And today I’m here to talk to you about System Integrity Protection which is a new hardening mechanism that we are introducing in El Capitan.

Before I dive into the details, I would like to take a step back and provide some context around what we’re trying to achieve here.

My team’s mission at Apple is to make sure that our users get to enjoy the great user experience that comes with our products, with the confidence that their personal data is protected and that malware, or even simply a poorly written piece of software, only has a limited impact on their experience.

So to that end, we’ve designed and integrated various pieces of security technology, as Ivan mentioned before, over the years in iOS and OS X.

One of the core principles that we applied there is the concept of defense in depth.

Now, the key principle here is something you’ve probably heard before, which is that security is all about layers.

And for the same reason that you shouldn’t put all of your eggs in the same basket, you shouldn’t rely on a single layer of protection to defend the device, because no matter how bulletproof, or water resistant, or shock absorbent this layer is, when it starts failing you, then it’s complete game over.

Instead, you should rely on multiple layers of protection, ideally with different security properties that will delay the advance of an attacker and reduce your attack surface.

Now, the concept of defense in depth is an old military concept that’s been used to defend fortresses all around the world for centuries now.

Because I know you guys like trivia, I can tell you that it was actually formalized by this gentleman, Sébastien de Vauban, in the 1670s, when he was asked by the King of France to rebuild all the fortresses around the country that were used to defend the kingdom.

And you may not be familiar with the character, but maybe you’ve seen some of his work before.

That’s the design of one of his castles.

You can clearly see several layers of protection here that are designed to stop different kinds of attacks and that will basically delay the attacker and funnel him through different bottlenecks that are easier to defend.

So let’s see how this applies to the OS X security model.

And I don’t know about you, but I’ve always dreamed of building my own fort, so there it is, and we’ll put our own security layers on it.

Starting from the bottom, we’ll start with Gatekeeper.

So Gatekeeper makes sure that an application that gets downloaded from the Internet onto the user’s machine has to be signed with a Developer ID certificate that Apple has issued to a developer, otherwise the application does not get to launch.

And combined with some other mechanisms that we have on the system, like malware detection, it’s actually a pretty effective measure to stop a massive malware attack on our ecosystem.

The second layer is Sandbox.

So back in Lion, we introduced App Sandbox and we made it mandatory for applications coming from the App Store.

We also highly recommended it for applications coming from outside of the App Store, for instance those using the Developer ID program.

Sandbox is a containment mechanism.

Which means that it makes sure that even if your application gets exploited, then the application only has access to the data that the user actually gave it, which means the application cannot steal all the user’s data, and cannot compromise the rest of the system.

The third layer, if you manage to go through or around the first two, is the classic POSIX permission scheme, which means that your application only runs with the set of privileges that the system granted to your user.

So the application won’t be able to access data owned by a different user and it won’t be able to modify systemwide configuration settings that are usually owned by the root user.

And finally, we can think of the Keychain as yet another layer on top of that, which is designed to protect the user’s secrets.

It relies on cryptography and application separation to make sure that only the application that stored a secret in the first place can get back to it later.

So when you look at the big picture here, you realize a couple of things.

First, Gatekeeper will stop untrusted code downloaded on the machine from being launched.

But it’s not actually a containment mechanism.

Once the application is running, it doesn’t stop it from doing anything.

Also, it does not protect code that’s already on the machine.

So code that ships with the OS, for instance, is not protected by it.

Then, sandboxing, although it’s probably the most effective containment mechanism we have on the platform, is only opt-in on OS X.

So there’s no requirement for every single process to actually run in a sandbox.

Finally, when you look at the POSIX layer, you realize that, well, most of the Macs out there are actually single user systems where the user is the de facto administrator running with administrative privileges all the time.

The root account is usually protected or hidden behind an often weak password or no password at all.

And in fact, if there is a password and you ask for it, the user will likely give it to you.

And finally, when you are root, then you actually have full control of the machine, because root can disable all security measures on the device.

It can replace kernel extensions, it can replace launchd or any other security service, it can even interfere with the Keychain layer that sits on top of it.

So the reality is that once you have code running on the Mac, it’s actually not that hard to become root, and once you are root, you have full control of the machine.

Which means that any piece of malware is actually one password, or one vulnerability away from taking full control of the device.

This shows us that we need another layer.

We need a layer that will eliminate the power of root on the machine and protect the system by default, as it was installed by Apple on the machine.

Both on disk and at runtime.

And because we are talking about taking some power away from root, then we need to provide a configuration mechanism that root itself cannot compromise, but that can give this power back to root.

This is what System Integrity Protection is.

It is a new security policy that applies to every single process running on the system.

Regardless of whether this process is running with extra privileges, or if it’s running unsandboxed.

It’s designed to provide extra protections to system components, both on disk and at runtime, and it makes it so that system binaries are only modifiable by the installer, if it installs an Apple signed package, or the Software Update mechanism.

And finally, that the system binaries are protected from runtime attachments, and from code injection.

So before we dive into the details here, let’s see how this is going to impact you the developers.

Well, the good news is that if you ship your application on the App Store, then your application is not impacted by any of this, because the App Store guidelines and the App Sandbox policy already prohibit all of these behaviors.

However, if you ship outside of the store, then your application is potentially impacted by this, if it relies on being able to modify system binaries or frameworks on disk.

If it needs to be able to install content in system locations, and I will explain a little bit about what system locations mean here.

And finally, if your application needs to inspect the memory state of any process on the system including system processes, or if it needs to be able to inject libraries or debug other processes, including system processes.

So let’s look at the key aspects of this new mechanism.

First, we look at new filesystem restrictions that we are introducing in El Capitan, and then we’ll see how these extend to new runtime protections.

Finally, we’ll see how this all ties in with the kernel extension development workflow and how it potentially impacts you if you are a kext developer.

And then because the feature can entirely be disabled, I will just show you how.

Let’s talk about the filesystem first.

What we’re trying to achieve here is we want to protect system content from being modified.

To do so, the installer will actually flag system content as it installs the files on disk.

We have a new filesystem flag that we introduced in El Capitan.

And then later on, at runtime, the kernel will stop any attempt at modifying these protected files or protected folders unless the attempt comes from a specially entitled process, of which there’s only a handful on El Capitan.

It will also stop you from writing to the block devices that back this protected content.

And it will stop you from mounting over this protected content.

And one thing you have to keep in mind is that for now, this only applies to the root and boot volume of the currently running OS.

So you should see this as a way for the system to protect itself at runtime.
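If you are curious, you can see the new flag from Terminal: on El Capitan, `ls` with the `-O` option displays file flags, and protected content carries the `restricted` flag (output abbreviated here for illustration):

```console
$ ls -ldO /System /usr/local
drwxr-xr-x  root  wheel  restricted  /System
drwxr-xr-x  root  wheel  -           /usr/local
```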

Now, because we are trying to protect system content on disk, we need to have a clear separation between system content and third-party content.

So in El Capitan, all the locations that are on the left of this chart will now be considered system locations.

Which means that the system will actually stop you from writing there, including if it comes in an installer package.

So it should not impact many of you here because we’ve been advising for many years now that you shouldn’t write to these locations.

But it’s now going to be a hard failure in El Capitan if your package installs content in there.

So if you install anything in /System, you need to move this content into the appropriate subfolder of /Library, if it’s supposed to be system wide content, or ~/Library if it’s supposed to be user content.

If you install anything in /bin, or /sbin, or anywhere under /usr like /usr/bin, /usr/lib, /usr/libexec, then you need to move this content into the appropriate subfolder of the /usr/local folder, because that’s the only location that is now available to third-parties.

And then a reminder that the best location for your content is still /Applications because that’s the location that is visible to the user and it’s easy for them to actually drag your application to the trash and just remove all content.

One important note here: when the user upgrades from a Yosemite install to an El Capitan install, the installer will actually migrate any third-party content that it finds in system locations outside of these locations.

So you need to make sure that today you start migrating that content away as soon as possible so as not to impact these users.

Now, let’s look at the runtime protections.

Being able to modify the behavior of a process at runtime is essentially equivalent to being able to modify the binary on disk.

So if we want to protect the binaries and system content on disk, we have to make sure that it’s not possible to inject code into or modify the behavior of these system processes.

To do so, we are introducing a new restricted flag in the process structure that the kernel holds for every single process.

And the kernel will set that flag at exec time if the main executable is protected on disk, or if the main executable is signed with an Apple-private entitlement.

And then later on, the system will actually treat these restricted processes slightly differently than regular processes.

For one, the task_for_pid and processor_set_tasks SPIs will now fail if they are called on a restricted process.

And they will set errno to EPERM.

Which means that if part of your product relies on being able to attach to a system process at runtime, for instance the Finder, and you expect to be able to inject code into the Finder, that is not going to work anymore.

Then if you fork and exec a binary that will result in the child process being restricted, the system will automatically reset the Mach special ports on that child process, which means you won’t be able to keep control over the child process.

So if you expect to be able to fork a privileged tool and then keep control on it, that’s not going to work anymore.

The dynamic linker is going to ignore all DYLD_ environment variables on these restricted binaries.

So if you expect to be able to inject the library into a system binary when you exec it, the linker will just ignore the new library.

And finally, if you use DTrace, all DTrace probes that target a restricted process will no longer be matched, which means you won’t be able to see the interaction between the process and the kernel.

You won’t be able to inspect the restricted process memory space and you won’t be able to inspect the kernel memory.

Of course, this applies to our own debugger, LLDB.

If you try to invoke lldb even as root and try to attach to the Finder, then this is going to fail.

Now, when it comes to the kext signing program, I’m sure you know by now, all extensions have to be signed with a Developer ID for Kexts certificate that was issued by Apple, and then these extensions have to be installed into /Library/Extensions.

The new thing here is that because we are pulling the kext signing program under the System Integrity Protection umbrella, the kext-dev-mode boot-arg is now obsolete.

If you are a kext developer and you need to be able to test with unsigned kernel extensions, you will need to disable this protection, and I’ll show you how in a minute.

But it also means that this command line here, which you probably saw out there to disable kext signing, is not doing anything anymore.

So let’s talk about the configuration mechanism.

We strongly believe that this new mechanism, this new protection, is critical for our users.

That being said, we realize that it gets in the way of people who want to have complete control over their machine, and because of what I said before, because it protects the kernel on disk and requires all kernel extensions to be signed, then it also gets in the way of kext developers who want to be able to test with unsigned kext.

So because of that, it can be entirely disabled.

The configuration is stored in an NVRAM setting, which means that it applies to the entire machine.

So if you have several installs of El Capitan, they will all be configured the same way, and it’s persistent across OS installs.

So as you move from seed 1 to seed 2 up to GM and even later, the configuration will persist.

So we won’t reset it every time we install.

Now, because root can actually set an NVRAM setting, and we can’t trust root to do the right thing here, it means we cannot have the configuration mechanism in the OS itself.

So we actually pulled it out of the OS and put it in the Recovery OS.

So this NVRAM setting can only be set in Recovery.

If you want to change the configuration, you need to reboot your machine in Recovery OS, and you can do so by holding the Command+R key on boot.

Then all you have to do is launch the Security Configuration application from the Utilities menu, and check the System Integrity Protection box, apply and reboot.

Keep in mind that these steps that I just described are likely to change in an upcoming seed.

So make sure you read the release notes to know what the new steps are.
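For reference, the configuration tool that shipped in the release version of El Capitan is the csrutil command-line utility, run from Terminal in the Recovery OS; a typical session looks like this:

```console
$ csrutil status
System Integrity Protection status: enabled.

# Kext developers who need to test unsigned extensions:
$ csrutil disable
$ csrutil enable    # turn the protection back on when done
```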

So let’s summarize what we discussed so far.

System Integrity Protection is a new policy that applies to every process on the system.

It protects the system as it was installed by Apple, both on disk and at runtime, by restricting write access to system locations, and by preventing runtime attachment and code injection on system processes.

The installer will actively migrate third-party content outside of system locations, so make sure you actually migrate your content as soon as possible or that you fall back gracefully when you can’t find it.

Then finally, the feature can be disabled using the configuration mechanism that is in the Recovery OS.

That’s it for me.

Thank you very much, guys.

I will leave the stage to Andrew.


ANDREW WHALLEY: Thank you, Pierre.

I’m Andrew Whalley, and I manage the data security group within Core OS Security Engineering.

You heard about App Transport Security, and how it helps protect data in motion, through the network connections your app makes.

I’m going to look at the various ways that you can protect data at rest.

I will touch on the Keychain and storing user secrets, look at Touch ID and how you can use it to balance security and convenience in your app.

Along the way, I will be looking at existing technologies as well as what’s new in iOS 9 and how they can fit together to deliver a level of security appropriate for your apps.

So let’s start with a quick overview of the Keychain.

You can think of it as a very specialized database.

You store data by adding rows, which we call Keychain items, and then query for them with their attributes.

It’s optimized for small secrets and by secret I mean a password, a token or cookie, or cryptographic key.

If you have tens or thousands of megabytes to store, consider using file-based data protection or bulk encryption through an API like Common Crypto, and then just store the key in the Keychain.

These SecItem APIs have been around a long time, but they’re still the best place to store secrets, including in your new Swift apps.

So here we have a secret and we want to store it in the Keychain using SecItemAdd.

To do that, we construct a dictionary which includes both the secret and some attributes describing how to find it in the future and what protection it should have.

This pattern of creating a dictionary to describe or query for an item is also used by the calls to query, delete, and update an item, as well as some of the other APIs I will be talking about later.
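As a minimal sketch in Swift of the SecItemAdd pattern just described, where the service and account values are hypothetical placeholders:

```swift
import Foundation
import Security

// Hypothetical secret; kSecValueData always takes raw bytes.
let secret = "s3cret-token".data(using: .utf8)!

// The dictionary combines the secret, the attributes used to
// find it later, and the protection class it should have.
let attributes: [CFString: Any] = [
    kSecClass: kSecClassGenericPassword,
    kSecAttrService: "com.example.myapp",   // placeholder service
    kSecAttrAccount: "alice",               // placeholder account
    kSecValueData: secret,
    kSecAttrAccessible: kSecAttrAccessibleWhenUnlocked
]

let status = SecItemAdd(attributes as CFDictionary, nil)
if status != errSecSuccess {
    // handle the error, e.g. errSecDuplicateItem
}
```

The same dictionary shape, minus `kSecValueData`, is what you would pass to `SecItemCopyMatching` to query for the item later.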

There’s a lot more about Keychain APIs in Session 709 from 2013’s WWDC.

So here’s some things to consider when you are writing code to access the Keychain.

The first is dealing with user secrets is a really security sensitive part of your code.

So you should factor it into small, simple, testable units.

Often this is done with a wrapper class.

Whether you are using it directly or through a wrapper, make sure it has the highest level of protection that your application can use.

We describe and talk about data protection classes, which define the times at which cryptographic access is provided to those items.

For example, when the device is unlocked.

The AfterFirstUnlock class has been around for a while, and you can use it if you have to access items in the background, for example, if you are a VoIP app.

Always accessible is going to be deprecated in iOS 9.

So you need to start moving items out of that into a higher level.
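A sketch of that migration in Swift, assuming a hypothetical generic-password item: query for the item by its deprecated protection class and update just that attribute with SecItemUpdate:

```swift
import Foundation
import Security

// Find items that were stored with the deprecated "always
// accessible" class for a hypothetical service...
let query: [CFString: Any] = [
    kSecClass: kSecClassGenericPassword,
    kSecAttrService: "com.example.myapp",       // placeholder
    kSecAttrAccessible: kSecAttrAccessibleAlways
]

// ...and move them to AfterFirstUnlock, which still allows
// background access after the first unlock following boot.
let update: [CFString: Any] = [
    kSecAttrAccessible: kSecAttrAccessibleAfterFirstUnlock
]

let status = SecItemUpdate(query as CFDictionary, update as CFDictionary)
```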

With watchOS 2, your Watch apps now have access to the SecItem APIs, though entering a full user name and password on the Watch is probably not the user experience you want.

If your Watch app is displaying data from a paired iOS device, consider just sending that content across and not a persistent credential.

If your Watch app does need direct access to a credential then consider rather than storing a full user name and password, have your server send you a token or a cookie that only has the permissions to access the content that the focused functionality of your Watch app requires.

It’s not just on the Watch that user name and password prompts can be inconvenient, and over the last few releases we have introduced a couple of technologies to help you prompt for passwords less often.

The first is shared web credentials.

We all know and love Safari saved passwords and how it will suggest and store them for us.

With iCloud Keychain, passwords will be synced and autofilled on all of your devices.

However, it’s common for a service to have both a website and an iOS app.

So it would be great to have those applications participate in Safari saved passwords and with shared web credentials you can.

Here’s some code you might want to include in the sign-up or the registration flow of your application.

SecCreateSharedWebCredentialPassword will return a random string in the same format that Safari uses for its suggested passwords.

Then you can call SecAddSharedWebCredential to let Safari know that there’s a new user name and password to autofill for a specific domain.
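Put together, that sign-up flow might be sketched like this in Swift; the domain and account are placeholders, and the domain must match your associated-domains entitlement:

```swift
import Security

// Ask the system for a random password in Safari's format...
if let generated = SecCreateSharedWebCredentialPassword() {
    let password = generated as String
    // ...create the account on your server with it, then tell
    // Safari there is a new credential to autofill for the domain.
    SecAddSharedWebCredential("example.com" as CFString,
                              "alice" as CFString,
                              password as CFString) { error in
        if let error = error {
            // e.g. the entitlement or the server JSON file is missing
            print("Could not save shared credential: \(error)")
        }
    }
}
```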

In iOS 9, we’ve made it easy to give this functionality a try by relaxing some of the security checks when you’re running in the Simulator.

So let’s have a look at that.

Here I’m going through the registration flow of my app to which I just added the code I showed you a few moments ago.

After that, just pop in to the settings for Safari and make sure that name and password autofill is enabled.

Go to Safari and see the results.

Here we are back in the app, being presented with a user name and password prompt.

Shared web credentials allows the application to display a picker, which lists all the accounts Safari has saved for that domain.

When the user has picked one, the user name and password is returned to your app in the completion handler for SecRequestSharedWebCredential, and you can then log the user straight in.

If you want to use this on device and not just in the simulator, you’re going to have to add an entitlement to your application, and you can do this in the Associated Domains section of the Capabilities tab in Xcode.

You are also going to need to put a JSON file up on your server, but you might have this already, as it’s the same one used for Handoff, as well as app links in iOS 9.
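For the shared web credentials piece, the relevant fragment of that JSON file might look like this, where the prefixed bundle identifier is a placeholder for your own team and app IDs:

```json
{
    "webcredentials": {
        "apps": [ "ABCDE12345.com.example.MyApp" ]
    }
}
```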

One change we have made in iOS 9 to make adoption even easier is that you no longer need to separately sign that file.

It’s going to be protected with a secure TLS connection.

I mentioned that Safari saved passwords uses iCloud Keychain, but you can also use it directly in your own apps.

Imagine you have an iPhone, iPad, and OS X App Store app and you want to make logging into one the same as logging into them all.

So for all of your app’s passwords that can be used on multiple devices, consider just adding the synchronizable attribute to all SecItem calls.
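A sketch of what that looks like, with placeholder server and account values; note the query-side use of kSecAttrSynchronizableAny so searches match both synced and local items:

```swift
import Foundation
import Security

// Opt an item into iCloud Keychain syncing when adding it.
let addAttributes: [CFString: Any] = [
    kSecClass: kSecClassInternetPassword,
    kSecAttrServer: "example.com",          // placeholder
    kSecAttrAccount: "alice",               // placeholder
    kSecValueData: "hunter2".data(using: .utf8)!,
    kSecAttrSynchronizable: true
]
SecItemAdd(addAttributes as CFDictionary, nil)

// When querying, match synced and device-local items alike.
let query: [CFString: Any] = [
    kSecClass: kSecClassInternetPassword,
    kSecAttrServer: "example.com",
    kSecAttrSynchronizable: kSecAttrSynchronizableAny,
    kSecReturnData: true
]
```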

There are a few things that you’ve got to think about, for example, deleting an item will delete it everywhere, so make sure you only do that under the right circumstances.

There are a few more caveats, and you can see them in SecItem.h. If you are interested in finding out more about the security of iCloud Keychain and how synced passwords are never accessible to anyone but the user, see the iOS Security White Paper.

There’s a link at the end of the session.

So to recap, on the Keychain, store all your secrets there.

There really is no place for passwords in plain text files or plists.

Protect them at the highest level possible and, if appropriate, use Shared Web Credentials and iCloud Keychain to sync them around your user’s devices and present fewer password prompts.

So iCloud Keychain is great for the secrets that can go on multiple devices but sometimes you want them to stay firmly and securely on just one device.

An example might be a secure messaging app, where encryption is device to device rather than user to user.

The various protection classes I already mentioned have a ThisDeviceOnly variant.

Items will be backed up, but only ever restored to the device they came from.

Last year, we added the WhenPasscodeSet class, which ensures items are always protected by a local device passcode.

And you can use AccessControl lists for even finer grained control over items.

So since we are talking about protecting device-specific credentials, let’s look at the security domains on an iOS device.

User space, where your application runs, and the kernel, which as Pierre mentioned provides process separation and other security functionality.

But it also provides many, many other OS facilities, which means it has quite a large attack surface.

So with the iPhone 5s, we added the Secure Enclave.

The Secure Enclave is a separate core that has been architected from the ground up with security in mind.

That’s where we put Touch ID to ensure the privacy and the security of your fingerprint biometrics.

We also moved the KeyStore component from the kernel into Secure Enclave and it’s that component which controls the cryptography around Keychain items and the data protection.

So let’s focus for a moment on Touch ID.

While we think of it as a security technology, where it really excels is convenience.

You can unlock your device without having to enter your passcode all the time, and that itself can give us some security benefits.

For example it’s now much easier to have a long, complex passcode which improves the security of data protection.

Or your phone can now lock immediately, so it’s unlocked and vulnerable for as short a time as possible.

In iOS 8, we provided two APIs so you can use Touch ID in your own applications as you balance security and convenience.

But why two and how do they differ?

Well, to understand, you’re going to have to know how Touch ID and biometric security work.

But luckily, it’s very simple.

It boils down to an if statement.

If a presented finger matches one that’s been enrolled, then do something.

It’s what that something is, and where it happens that makes the difference.

Let’s start with Local Authentication.

A finger is placed on the Touch ID sensor and is matched within the Secure Enclave.

With Local Authentication, knowledge of the match, just a Boolean success or failure, is sent to your app.

And it’s there that your app does something with that knowledge.

So while the process started in Secure Enclave, ultimately, it’s the application in user space where a security decision is being made.
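In code, that flow is short; this is a minimal sketch, and the reason string is a placeholder you would tailor to the operation being confirmed:

```swift
import LocalAuthentication

let context = LAContext()
var error: NSError?

// Check that Touch ID is available and has enrolled fingers...
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                             error: &error) {
    // ...then ask the Secure Enclave to match a finger. Your app
    // only ever sees the Boolean outcome in the reply block.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm deleting your account") { success, evalError in
        if success {
            // proceed with the sensitive operation
        } else {
            // fall back to a password prompt, or cancel
        }
    }
}
```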

So when might you want to use Touch ID with Local Authentication?

Think about your app and if you have any security barriers, such as requiring a passcode to confirm an operation, even though the user is already logged in.

Touch ID at that point would be much easier.

Or maybe you’ve always wanted to have an extra step in an authentication process, but without Touch ID, it would have been really just too much of a barrier.

So, for example, you could prompt before viewing especially sensitive data.

Or before an operation like permanently deleting an account.

One pattern is to have a Touch ID prompt early in your application’s flow.

But this can lead to a situation where the user has just unlocked with Touch ID, and then moments later has been prompted again by your application.

In iOS 9, we have added touchIDAuthenticationAllowableReuseDuration, which is a property on the Local Authentication context.

You can use this to specify a window during which you won’t need to prompt again if the user has recently unlocked their device with Touch ID.
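Using it is a one-line change on the context; the 30-second window here is an arbitrary example value:

```swift
import LocalAuthentication

let context = LAContext()
// If the user unlocked the device with Touch ID within the last
// 30 seconds, evaluating the biometrics policy on this context
// succeeds without prompting again.
context.touchIDAuthenticationAllowableReuseDuration = 30
```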

So that’s one way to make your policy a little more lax.

But maybe you want to make your policy a little more strict in some situations.

For example, by reacting to a new fingerprint being enrolled.

Also new, we have the evaluatedPolicyDomainState property.

It’s a totally opaque value that represents the current set of enrolled fingers.

All you can really do with it is compare it over time.

If it changed, then a finger has been added to or removed from the enrolled set in Settings.

If you detect that, and it's appropriate for your application, maybe you prompt again to see if the user still wants to use Touch ID in your application, or require a password to re-enable it.
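That comparison over time can be sketched as follows, assuming the app persists the opaque value between launches (NSUserDefaults and the "enrollmentState" key are illustrative choices, not part of the API):

```swift
import LocalAuthentication

// Sketch: compare the opaque enrollment state against a previously saved copy.
// evaluatedPolicyDomainState is populated after canEvaluatePolicy succeeds.
let context = LAContext()
context.canEvaluatePolicy(.DeviceOwnerAuthenticationWithBiometrics, error: nil)

if let domainState = context.evaluatedPolicyDomainState {
    let defaults = NSUserDefaults.standardUserDefaults()
    if let saved = defaults.dataForKey("enrollmentState")
        where !saved.isEqualToData(domainState) {
        // The enrolled finger set changed: consider requiring a password
        // before re-enabling Touch ID for this app.
    }
    defaults.setObject(domainState, forKey: "enrollmentState")
}
```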

So let’s recap what’s new for Local Authentication in iOS 9.

I have already mentioned touchIDAuthenticationAllowableReuseDuration and evaluatedPolicyDomainState.

You can also invalidate a Local Authentication context, and if a Touch ID prompt is currently being presented to the user, it will behave as if the user had hit Cancel and tear down the dialog.

evaluateAccessControl allows Local Authentication to be used with Keychain Access Control Lists; I will touch on that later, and there are also a lot more examples in some sample code we are releasing today.

So talking of Keychain Access Control Lists, that’s the second way you can use Touch ID within your application, by using it to protect specific Keychain items.

Here’s our architecture diagram again.

As before, the Touch ID match occurs in Secure Enclave but this time, knowledge of the match is sent within Secure Enclave to the KeyStore.

Only then will your Keychain item be released back to your application.

So this is useful if you want to add protection to a particular saved credential.

And to take advantage of additional security around Secure Enclave.

Maybe you've looked at the security tradeoffs of your particular application and decided there are some things you really don't want to save, so at the moment you're prompting every time, or very frequently.

Maybe using Access Control Lists you can save it, protect it with Touch ID and provide a more convenient experience.

Or you could use it to increase the security of something you’re already saving.

You create access control lists by specifying two security properties.

The first is effectively the data protection class.

That’s when the data is cryptographically accessible to the Secure Enclave.

Next, you specify a policy.

The policy describes a condition that must be met before Secure Enclave will release the item even if it has cryptographic access to it.

So let’s have a look at the policy types.

The first is .UserPresence.

This will prompt for a Touch ID match and fall back to the device passcode.

And you can go straight to .DevicePasscode as well if you want.

New in iOS 9 is .TouchIDAny.

This will require a Touch ID match and there is no fallback.

Also new, and stricter still, is .TouchIDCurrentSet.

With this, items will only be released if the set of enrolled fingers has not changed since the item was stored.
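As a sketch, storing an item under the .TouchIDCurrentSet policy might look like this (Swift 2-era syntax; the service name, account name, and secret payload are placeholders):

```swift
import Foundation
import Security

// Sketch: create an access control list combining a protection class
// with the .TouchIDCurrentSet policy, then store a keychain item with it.
let acl = SecAccessControlCreateWithFlags(
    kCFAllocatorDefault,
    kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,  // protection class
    .TouchIDCurrentSet,                               // policy: no fallback
    nil)!

let secret = "token".dataUsingEncoding(NSUTF8StringEncoding)!  // placeholder data
let attributes: [String: AnyObject] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.app",  // placeholder
    kSecAttrAccount as String: "user",             // placeholder
    kSecValueData as String: secret,
    kSecAttrAccessControl as String: acl
]
let status = SecItemAdd(attributes, nil)  // errSecSuccess on success
```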

I'm going to focus on this one, because your application might benefit from it as a form of multifactor authentication.

When you think about multiple factors, you often talk about something you know, like a password, and something you have and carry with you, like a physical token, a smart card, or maybe an iOS device with Secure Enclave and Touch ID.

If you store an item, protected with the TouchIDCurrentSet policy, there’s no way to get at the item without a successful Touch ID match.

There’s no fallback.

And even if an adversary has the device passcode, they can't go into Settings, enroll a finger, and then gain access to that item.

The last two policies, .ApplicationPassword and .PrivateKeyUsage allow you to implement some advanced functionality that goes beyond Touch ID.

The first is ApplicationPassword.

To help illustrate, let’s look at how an item in the WhenUnlocked class is cryptographically protected.

In the same way that even the best door lock is useless if you’ve left your key in the door, encrypted data isn’t actually protected if stored alongside the key.

The security of data protection and the Keychain comes down to the device passcode and that’s stored in one of the most complex systems we know, the user’s brain.

They remember it, enter it into the device, and we take that passcode and derive a cryptographic key and it’s that, which decrypts the item.

Now, let’s look at an item protected with ApplicationPassword.

Just the device passcode is no longer sufficient.

Your application has also got to provide its own password.

Again, we derive a cryptographic key from it, and it’s only when the device passcode and the app password are both present that access is granted to the Keychain item.

As I mentioned, if you store the password on the device or bake it into your app, this really doesn't offer any additional protection.

So you have to think about where, off the device, you can store that password.

Maybe it’s up on a server, which can implement its own policy about when it’s released back to your app.

Or perhaps you have a physical accessory and you want to prove the user has it.

If the accessory can’t give your app a password, then it can’t decrypt a Keychain item and you know it’s not present.

To use application passwords, you create both an access control list and a Local Authentication context.

The first says please use an application password for this item, and the second specifies the password itself.

You then take both of these and add them to the dictionary you pass to the SecItem call.

This is an example of using a Local Authentication context in conjunction with the Keychain item.
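Putting that together as a sketch (in practice the app password would come from a server or accessory; here it, along with the service name and secret, is a placeholder):

```swift
import Foundation
import LocalAuthentication
import Security

// Sketch: the ACL requests an application password, and the LAContext
// supplies it, so both are present when the item is stored.
let acl = SecAccessControlCreateWithFlags(
    kCFAllocatorDefault,
    kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
    .ApplicationPassword,
    nil)!

let context = LAContext()
let appPassword = "from-server".dataUsingEncoding(NSUTF8StringEncoding)!  // placeholder
context.setCredential(appPassword, type: .ApplicationPassword)

let attributes: [String: AnyObject] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.app",  // placeholder
    kSecValueData as String: "secret".dataUsingEncoding(NSUTF8StringEncoding)!,
    kSecAttrAccessControl as String: acl,
    kSecUseAuthenticationContext as String: context
]
SecItemAdd(attributes, nil)
```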

So that was ApplicationPassword.

The final new policy, it’s Private Key Usage.

Here’s the diagram from earlier, where we saw a Keychain item being released from the KeyStore in Secure Enclave, back to your application.

This is obviously needed if you take that password and use it to log into a server, but it exposes the password to a potentially compromised user space.

So wouldn't it be great if there were a way to keep the secret inside the Secure Enclave but still have it be usable?

And there is, using asymmetric cryptography.

There we don’t just have a single key, but a key pair.

A public key that doesn’t require any protection, and a private key we have to keep safe.

Using this requires the SecKey API, and the details are somewhat beyond the scope of the few minutes I have left, but here is an overview just so you can see the flow.

Calling SecKeyGeneratePair will cause the private key to be stored in Secure Enclave, if you specify some new additional parameters.

But the public key is still returned to your application to store.

If you try and retrieve the private key, using SecItemCopyMatching, you can get a reference, but cannot get the actual data out of Secure Enclave.
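The generation step can be sketched like this (Swift 2-era syntax; the access control flags follow the pattern described here, and the attribute dictionary is an illustrative assumption, so check the SDK headers for exact signatures):

```swift
import Security

// Sketch: generate a P-256 key pair whose private half never leaves the
// Secure Enclave; using it will require a Touch ID match.
let acl = SecAccessControlCreateWithFlags(
    kCFAllocatorDefault,
    kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
    [.TouchIDAny, .PrivateKeyUsage],
    nil)!

let parameters: [String: AnyObject] = [
    kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
    kSecAttrKeyType as String: kSecAttrKeyTypeEC,
    kSecAttrKeySizeInBits as String: 256,
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String: true,
        kSecAttrAccessControl as String: acl
    ]
]

var publicKey: SecKey?
var privateKey: SecKey?
let status = SecKeyGeneratePair(parameters, &publicKey, &privateKey)
// publicKey can be exported and stored; privateKey is only a reference.
```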

So how do you actually use it?

Well, you might have some data you want to be signed.

So you can call SecKeyRawSign and pass the data into Secure Enclave, and if you’ve set up the private key to be protected by Touch ID, only after a successful match will the private key be used to sign that data, and have it returned to your application.
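A sketch of that signing step (the challenge data, buffer size, and padding choice are illustrative assumptions; `privateKey` is the reference returned by SecKeyGeneratePair):

```swift
import Foundation
import Security

// Sketch: sign a server-supplied challenge inside the Secure Enclave.
// The Touch ID prompt appears when SecKeyRawSign is called.
let challenge = NSData()  // placeholder: digest of the server's challenge
var signature = [UInt8](count: 128, repeatedValue: 0)
var signatureLength = signature.count

let status = SecKeyRawSign(privateKey!, SecPadding.PKCS1,
                           UnsafePointer<UInt8>(challenge.bytes),
                           challenge.length,
                           &signature, &signatureLength)
// On success, send signature[0..<signatureLength] back to the server,
// which verifies it with the public key stored at enrollment.
```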

So one place you might want to use this is strengthening Touch ID as a second factor.

I'm going to give an example flow, but bear in mind that there are a lot of intricate details in writing a cryptographic protocol.

So please don’t take this to be too prescriptive.

First of all, for the enrollment flow, you would generate a key pair, and send the public key back to the server, along with a user’s regular login details.

The server would record the public key as being associated with that user, and that’s enrollment.

Later on, when the server wants to verify that you’re logging in from the same physical device that you were before, it can send a challenge to your application, which in turn calls SecKeyRawSign.

The user will present their finger and have a Touch ID match.

That will result in the challenge being signed and you can send it back to the server.

And the server can then use the public key it previously stored to validate the signature.

Just a few more details on this topic.

The supported key type is Elliptic Curve P-256; the private key is not extractable in any form, even protected; and the supported operations are RawSign and RawVerify.

So in summary, I have given an overview of the Keychain, and some situations where you might want to use it.

I talked about the technologies we have to avoid presenting the user with password prompts.

We looked at the two Touch ID APIs, Local Authentication and Keychain Access Control Lists.

And I told you about some new advanced features you can build on top of the things we added to those APIs in iOS 9: application passwords and Secure Enclave-protected private keys.

I’m always fascinated to find out what wonderful things you are going to make using these new APIs and how you can better balance security and convenience in your own applications.

There’s some more information available online, including the iOS Security White Paper I mentioned earlier.

There's more information about App Transport Security in Thursday's Networking with NSURLSession session.

And come and visit us tomorrow morning and Thursday morning in the Security and Privacy labs.

Thank you very much, indeed.

