What’s New in Swift 

Session 402 WWDC 2017

Swift 4 continues the evolution of the safe, fast, and expressive language, with better performance and new features. Learn about the new String and improved generics, see how Swift 4 maintains support for your existing Swift 3 code, and get insight into where Swift is headed in the future.

Good afternoon everyone.

[ Applause ]

Thank you.

Welcome to what’s new in Swift.

My name is Doug Gregor.

I’m here with some of my colleagues from the Swift team to talk about some of the great things we’re bringing in Swift 4.

Now, if you follow iOS developer and author Ole Begemann, you already know everything that’s in Swift 4.

This is something I absolutely love about the Swift community.

What Ole did here is, he looked at all the proposals, what was going into Swift 4, and what did he do?

He built a playground to demonstrate how these features worked, and he shared it with the world so we could all learn from it.

This is awesome.

And it’s possible because at any point you can go over to Swift.org, our home of open source, and download a snapshot of the latest and greatest Swift compiler.

That snapshot is a toolchain that you can install into Xcode.

It provides a new compiler, debugger, SourceKit, everything.

So, you can build your app against the latest tools.

Try out some of the new features, check whether we fixed your favorite bug.

Of course, this is all possible because Swift is open source.

We develop everything in the open on GitHub, so you can follow along.

You can participate if you’re interested.

And it’s also the way Swift evolves.

The standard library and the language evolve through this open evolution process, where we evaluate individual proposals, refine them, and improve them to make Swift better for the whole development community.

Now as you undoubtedly heard by now Xcode 9 introduces refactoring support for Swift.

[ Applause ]

So, all of the language-level bits that make refactoring work for Swift actually live down in the Swift project.

So, we’ll be open sourcing that soon.

The great thing about this is then you can check out and build the Swift source code, build your own refactorings and then through the toolchain mechanism I just talked about, try them out in Xcode.

All right, it’s a way of really working with your development tools.

Now, also part of our open source ecosystem is the Swift package manager.

So, this supports a growing ecosystem with over 7000 packages on GitHub.

This is extremely popular for server-side Swift, where Swift PM makes it really easy to grab the server components you need to build a server-side Swift app on Linux.

Now, the Swift Package Manager has seen a lot of improvements this year.

A better manifest API, a better development workflow, and so on.

And also, we’ve made a lot of progress toward our eventual goal of first-class support for Swift packages within the Xcode IDE.

And we’re getting closer to that with the use of Swift PM as a library and, of course, the new Xcode build system, which is written entirely in Swift.

So, we’ve got a lot to cover today.

I’ll be talking about a couple of small refinements and additions to the language itself before we dive into the source compatibility story, where we’ll talk about how you can leverage all of the code you’ve built in Swift 3 with Swift 4 and Xcode 9.

My colleague Bob will talk about Swift tools and improvements in performance, before Ben dives into strings, collections, and some of the generic features of Swift.

Finally, John will talk about exclusive access to memory, which is a semantic restriction we’re introducing into the Swift language to build for the future.

So, let’s start with one small little feature.

Access control.

So, here I’ve defined this simple date structure.

And it’s similar to the one in Foundation.

It’s going to use secondsSinceReferenceDate as an internal representation, but I’m making this private because this isn’t a good API to expose out to my users.

I want this type to be a good value type citizen, so it’s Equatable and Comparable.

But already this code is feeling a little bit cluttered and messy.

I really should break this up into separate extensions; one for each task, right?

This is good Swift coding style, but Swift 3 didn’t support it very well, because you would get this error that you can’t reach across to a private declaration from another lexical scope.

You could fix this with fileprivate.

But that meant the whole file could see this member and that’s not quite right.

It’s too broad.

And so, Swift 4 refines this so that we expand the scope of what private means to only cover the declarations in all extensions of a particular type within that same source file.

This fits much better with the notion of using extensions to organize your code.
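The refined rule can be sketched like this; SimpleDate here is a hypothetical stand-in for the date struct in the talk:

```swift
// Swift 4's refined `private`: a private member is visible to
// extensions of the same type within the same source file.
struct SimpleDate {
    private var secondsSinceReferenceDate: Double
    init(seconds: Double) { secondsSinceReferenceDate = seconds }
}

extension SimpleDate: Equatable {
    static func == (lhs: SimpleDate, rhs: SimpleDate) -> Bool {
        // Swift 3 rejected this cross-extension access to a private
        // member; Swift 4 allows it within the same file.
        return lhs.secondsSinceReferenceDate == rhs.secondsSinceReferenceDate
    }
}

extension SimpleDate: Comparable {
    static func < (lhs: SimpleDate, rhs: SimpleDate) -> Bool {
        return lhs.secondsSinceReferenceDate < rhs.secondsSinceReferenceDate
    }
}
```

Each conformance lives in its own extension, and the private stored property stays hidden from other files.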

[ Applause ]

And with this change, let us never speak of access control again.

[ Laughter ]

Second, I want to talk about composing classes and protocols.

So, here I’ve introduced this Shakable protocol for a UI element that can give a little shake effect to draw attention to itself.

And I’ve gone ahead and extended some of the UIKit classes to actually provide this shake functionality.

And now I want to write something that seems simple.

I just want to write a function that takes a bunch of controls that are shakable and shakes the ones that are enabled to draw attention to them.

What type can I write here in this array?

It’s actually frustrating and tricky.

So, I could try to use UIControl.

But not all UIControls are shakable in this game.

I could try Shakable, but not all Shakables are UIControls.

And there’s actually no good way to represent this in Swift 3.

Swift 4 introduces the notion of composing a class with any number of protocols.

[ Applause ]

It’s a small feature, but it fits in nicely with the rest of Swift.
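Here’s a rough, self-contained sketch of the composition; a hypothetical Control base class stands in for UIControl so the example runs without UIKit, and the names are assumptions:

```swift
protocol Shakable {
    func shake()
}

// Stand-in for UIControl in this sketch.
class Control {
    var isEnabled = true
}

class SlidingControl: Control, Shakable {
    var shakeCount = 0
    func shake() { shakeCount += 1 }
}

// `Control & Shakable` requires both the base class and the protocol;
// composing a class with protocols like this is new in Swift 4.
func shakeEm(controls: [Control & Shakable]) {
    for control in controls where control.isEnabled {
        control.shake()
    }
}
```

Inside the loop, each element can be treated both as a Control (isEnabled) and as a Shakable (shake()).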

Now, if you come from an Objective-C background, you already know everything about what this does, because Objective-C has had this feature for a very long time.

Here’s a Touch Bar API where the client is an NSView that also conforms to NSTextInputClient.

In Swift 3, we actually couldn’t represent that type, so we’d import it as an NSView, which is a little bit embarrassing.

So, Swift 4 corrects this, and now we can actually import the type as an NSView that is an NSTextInputClient, mapping all of the APIs appropriately.

[ Applause ]

So, there’s a huge number of great features we’re going to talk about today.

I want to call out a couple of features in the realm of improving on things we think of as Cocoa idioms.

KeyPaths, Key-Value Coding, and Archival & Serialization are big new Swift features that will be discussed in another session, “What’s New in Foundation,” on Wednesday.

And these features work beautifully with Swift value types.

So, you can use them throughout all of your Swift code.

In that session, they’ll also answer the age-old question: how do I parse JSON in Swift?

All right, let’s talk about source compatibility.

So, by its nature, Swift 4 is largely source compatible with Swift 3.

And the reason is the language hasn’t changed all that much.

We’ve made some refinements.

Like the change to access control.

We’ve made some additions.

Like the change to class and protocol composition.

There have also been improvements in the way that existing APIs in the SDK map into Swift.

They provide better Swift APIs than they did previously.

But the scale of such changes is much, much smaller than say from Swift 2 to 3, or even Swift 1 to 2.

And so, going from Swift 3 to 4 isn’t as big of an upset to a code base as it used to be.

And many of the features we’re talking about are purely additive.

So, they’re in some new syntactic space.

It doesn’t break code to introduce these new features.

That said, we want a smooth migration path.

So, we’re also introducing Swift 3.2.

The most important thing about Swift 3.2 is that it’s not a separate compiler or a different toolchain.

It’s a compilation mode of the Swift 4 compiler that emulates Swift 3 behavior.

And so if some syntax or semantics change from Swift 3 to 4, it will provide the Swift 3 behavior.

Moreover, it understands the changes that have been made in the new SDK.

And so, if an API projects differently in Swift 4 than it did in Swift 3, it will actually roll back those changes to the Swift 3 view.

The end result here is that when you open up your Swift 3 project in Xcode 9 and build it with Swift 3.2, pretty much everything should just build and work the way it did before.

And this makes a fantastic path to adopting the new features of Swift 4, because most of them are also available in Swift 3.2, as well as all of the great new APIs and frameworks in this year’s SDKs.

Now, when you’re ready to migrate to Swift 4, there’s a migrator, as in previous years, to take your code from Swift 3 and move it to Swift 4.

Unlike in previous years, though, this migration effort isn’t a stop-the-world affair, where nothing else gets done until the entire stack has been moved forward.

The reason is Swift 3.2 and Swift 4 can co-exist in the same application.

And so, you can set which version.

[ Applause ]

You can set which version of the language you’re going to use on a per target basis.

So, if you want to migrate to Swift 4, you can migrate your app target, but leave all of your frameworks and all of your other dependencies in Swift 3.2.

That’s fine.

As your dependencies update and move to Swift 4, that’s perfectly fine, they can work with your app, whether it’s in Swift 3.2 or Swift 4.

The Swift Package Manager also understands this.

And so, it will build packages with the tools version that was used to develop the package and if a package supports multiple Swift language versions, that can be described in the manifest, so the Swift Package Manager will do the right thing.
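A manifest along these lines is how that support is declared; the package name and layout here are hypothetical:

```swift
// swift-tools-version:4.0
// Declaring the Swift language versions this package supports,
// so the Swift Package Manager picks the right compilation mode.
import PackageDescription

let package = Package(
    name: "MyLibrary",
    targets: [
        .target(name: "MyLibrary"),
    ],
    swiftLanguageVersions: [3, 4]   // builds in Swift 3.2 or Swift 4 mode
)
```

With both versions listed, dependents can build the package in whichever language mode their tools select.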

Now, we think that with Swift 3.2 and Swift 4 co-existence, with the smaller amount of change from Swift 3 to Swift 4 that you’ll get a nice buttery smooth migration path to Swift 4.

And with that, I’d like to bring up Bob to talk about improvements to the build.

[ Applause ]

As the size and complexity of your Swift apps continues to grow, we’ve been investing in improvements in the build system to keep up with that growth.

Xcode 9 has a brand-new implementation of the build system.

Of course, it’s written in Swift and it’s built on top of the open source LLBuild engine.

It is really fast at calculating the dependencies between the different steps of your build.

You’re most likely to notice that when doing an incremental build of a large project.

This is a technology preview in Xcode 9.

We’d love to have you try it out.

So, go to the project or workspace settings in the file menu and choose the new build system.

Besides having a faster build system, another way we can use your system more efficiently is to avoid doing redundant work.

And Xcode 9 does this in a few different ways.

The precompiled bridging header speeds up the build of large mixed source projects.

The bridging header describes the interfaces in your Objective-C code so they can be used in your Swift code.

If you have a lot of Objective-C, the bridging header can be really large and slow to compile.

And parsing the contents of that header repeatedly for every one of your Swift files is wasteful.

The Apple LLVM compiler has a great solution for this, precompiled headers.

Xcode 9 will now use a precompiled version of the bridging header so that it only needs to be parsed once.

Apple’s music app is a great example where this helps a lot.

Music is a really large project and it’s split about evenly between Objective-C and Swift.

Using a precompiled bridging header, which is the default in Xcode 9, speeds up the debug build of music by about 40%.

Code coverage testing is another powerful tool, but in Xcode 8, it can also be a source of redundant effort.

Consider the common scenario where you make some changes to your code, you get it to build and then you want to run your tests with coverage.

Here’s what that looks like in Xcode’s report navigator.

Notice there’s an extra build.

Why is that there?

Coverage testing is implemented in the compiler by emitting extra instrumentation code to count the number of times each fragment of code runs.

With Xcode 8, the normal build does not include that instrumentation.

So, before you can run your tests with coverage, the whole project needs to be rebuilt.

In Xcode 9, we’re combining those builds.

If you have coverage enabled for testing, the normal build will include the instrumentation.

There’s a very small cost for this.

Less than 3% for one project that we measured.

But you get a huge benefit, because now you only need to build your project once in that scenario.

This next change is not actually about making your build faster.

It’s about avoiding this.


[ Applause ]

Indexing is great.

It’s key to some of Xcode’s most powerful features, like the new global rename refactoring, but indexing in the background wastes effort.

Whenever you build your project the compiler needs to look up all the same symbol information that’s needed for the symbol index.

And so, now in Xcode 9 we will automatically update the index whenever you build your project.

There’s a very small overhead for that at the build time, but then there’s no need to repeat all that work again in the background.

So, we have a new build system and several different ways of using the system more efficiently that we think are going to be a great improvement, especially for those of you with large Swift projects.

Let’s turn now and look at runtime performance.

Delivering high performance code has always been one of the key goals for Swift.

And with each new release of Swift, performance has increased.

The next step is to make that performance more predictable, more stable.

Let’s look at an example with Swift 3.

Here I have a simple protocol ordered with a comparison function, and another function that’s going to test that by sorting an array of values, using the comparison.

The code is written in a very general way.

It has to work with any value that conforms to the ordered protocol.

Even the different elements within the array could be different types.

Let’s look at the performance of this.

This graph shows the time in seconds to sort 100,000 arrays of 100 elements each.

And it’s measuring for different sizes of array elements.

So, for a one-word struct, it takes a little less than 2 seconds to do those sorts.

If for some reason the size of the values increases to 2 words, the time increases only very slightly.

And if it grows to three words, it continues on that same trajectory.

What if we have four words?

We hit a performance cliff.

It’s nine times slower.

What just happened here?

To understand this performance cliff we need to delve into the implementation of Swift.

If you’re interested in this, I recommend you watch Understanding Swift Performance from last year.

For now, I’ll just give a quick summary.

To represent a value of unknown type, the compiler uses a data structure that we call an existential container.

Inside the existential container there’s an in-line buffer to hold small values.

We’re currently reassessing the size of that buffer, but for Swift 4 it remains the same 3 words that it’s been in the past.

If the value is too big to fit in the in-line buffer, then it’s allocated on the heap.

And heap storage can be really expensive.

That’s what caused the performance cliff that we just saw.

So, what can we do about it?

The answer is COW buffers, existential COW buffers [laughter].

No, not that kind of cow.

COW is an acronym for copy-on-write.

You may have heard us talk about this before because it’s a key to high performance with value semantics.

With Swift 4, if a value is too big to fit in the inline buffer, it’s allocated on the heap along with a reference count.

Multiple existential containers can share the same buffer as long as they’re only reading from it.

And that avoids a lot of expensive heap allocation.

The buffer only needs to be copied with a separate allocation if it’s modified while there are multiple references to it.

And Swift now manages the complexity of that for you completely automatically.

What’s the impact on the performance?

It’s much more stable.

Instead of taking over 18 seconds to sort those four-word structs, it now takes only a little more than 4 seconds.

It’s a gentle slope instead of that steep cliff.

This improvement applies to the case where the compiler is dealing with values where it doesn’t know the type at all.

But a similar issue comes up with generic code, where the type is parameterized.

Let’s look at that.

In many cases, the compiler’s able to make generic code fast by using specialized versions for specific types.

But sometimes the compiler cannot see the specific types and then it needs to use unspecialized generic code.

That can be much slower.

It’s another form of a performance cliff.

Until now, Swift has used heap allocation for generic buffers in unspecialized code.

And as we’ve just seen, heap allocation can be really slow.

Swift 4 now uses stack allocated generic buffers.

So, we get similar improvement for unspecialized generic code.

Now making Swift performance really predictable is an ongoing effort.

But Swift 4 has made big strides to fix some of the worst of those performance cliffs.

Another dimension of performance is size.

As your apps grow larger and larger, code size is becoming increasingly important.

One way to make code size smaller is to avoid unused code.

Let’s return to Doug’s example of a date struct.

As with any value type, it’s a good idea to make it conform to the Equatable and Comparable protocols.

But what if it turns out that your app isn’t using one of those.

You shouldn’t have to pay for code that you don’t use.

In Swift 4, the compiler will automatically optimize away conformances that are unused, so you don’t pay the price for them.

And note that this interacts with other optimizations, such as devirtualization and inlining, which expose other opportunities for the compiler to remove unused conformances.

So, this is an optimization the compiler can do completely automatically.

That isn’t always possible.

Let’s look at another one.

Here, I have a very simple class with two functions.

The compiler will generate those functions, and because in Swift 3 this is a subclass of NSObject, the compiler will automatically infer the objc attribute.

What that means is these functions should be accessible from Objective-C.

And so, the compiler will generate thunk functions that are compatible with the Objective-C conventions and that forward to the Swift functions.

Now, functions within Swift are still called directly.

In my example, show calls print.

And what that means is that the thunk functions often end up being unused.

But because they are exposed to the Objective-C runtime the compiler has no way to tell that they’re unused.

Fixing this requires changing the language model.

And so, in Swift 4, the objc attribute is only inferred in situations where it’s clearly needed.

Such as when you’re overriding an Objective-C method or conforming to an Objective-C protocol.

This change avoids a lot of those unused thunks.

When we adopted this in Apple’s Music App, it reduced the code size by almost 6%.

When you have a set of functions that you want to make accessible in Objective-C, we recommend that you put them in an extension and mark the extension with the objc attribute.

This guarantees that all those functions will be available to your Objective-C code.

And if that’s not possible for some reason, the compiler will report an error to you.

So, what does it take to adopt this change?

Doug mentioned the migrator tool to help you move your code into Swift 4.

With objc inference the migrator offers you a choice.

If you don’t care that much about code size, the migrator can easily match the Swift 3 behavior by simply inserting the objc attribute wherever it would previously have been inferred.

But with just a little more effort, you can take advantage of the code size improvements by using minimal inference.

If you go with that option for minimal inference, the migrator will start by finding all the places where it can determine the objc attribute is definitely needed, and it will insert the attribute automatically.

That may not be sufficient because the migrator is unable to detect issues across separate Swift modules or in your Objective-C code.

So, to help you find those places, the migrator will mark the thunk functions that are inferred as deprecated.

And you can then build your code and run your code and look for deprecation warnings.

Let’s look at that more closely.

Here’s an example of a build warning.

I’ve got a Swift function to show the status of my view controller.

And I’m calling that from my Objective-C code.

But because I’m still relying on the objc inference, I get this warning about it being deprecated.

To fix this, I need to go and find the place in my Swift code where the function is defined and add the objc attribute.

Some of the issues may not be visible at build time.

In Objective-C, it’s possible to refer to a function in ways that can’t be detected until runtime.

And so, for that reason, it’s also important to run your code, run all of your tests, exercise as much of the code as you can, and look on the console in Xcode’s debug area for messages like this one.

Telling you that you need to add an objc attribute.

Notice, that the message there shows you the exact source location where the function is defined.

So, you can just go to that location and add the attribute.

Once you’ve fixed all the build and runtime warnings, go to the build settings for your project.

Change the Swift 3 objc inference setting to default.

And with that the migration is done.

It’s really not that hard.

We did this for Apple’s Music App, there were only a total of about 40 places where an objc attribute needed to be added in a really large project.

And more than 30 of those could be handled completely automatically by the migrator.

This change to limit objc inference as well as the optimization of removing unused protocol conformances both help to reduce your code size.

I’m going to tell you now about another change that has an even bigger impact on the overall size of your app.

Besides the instructions and data that make up a compiled Swift app, the symbol tables in Swift frameworks occupy a lot of space.

Swift uses a lot of symbols and the names are often quite long.

For example, in Swift 3.1 almost half of the standard library is taken up by symbols.

As shown by the darker blue bar here.

In Swift 4, much less space is needed for symbols.

So, even though there’s a lot more content in the standard library, the total size has actually decreased.

We’ve accomplished this by making the names shorter and also by stripping out symbols.

Both the static linker and the dynamic linker use a separate trie data structure to quickly look up symbols.

And so, what that means is that the Swift symbols are rarely needed in the symbol table.

Xcode 9 has a new build setting, Strip Swift Symbols that’s enabled by default.

You can turn this off, if it causes problems for your workflow.

But Xcode normally runs the symbol stripping as part of archiving your project.

So, this feature has no impact on earlier stages of development.

And in particular, it should not interfere with normal debugging or profiling.

If for some reason you want to examine the symbols that are present in a binary after it’s been stripped, you can use the dyldinfo tool with the export option to look at the exported symbols.

This build setting applies to the code that you build in your project.

The Swift standard libraries are handled separately.

They’re stripped as part of App Thinning.

It’s important to understand this because if you want to measure the size of your app, you really need to go through Xcode’s distribution workflow and export your app.

And when you do that, you’ll see there’s a new setting that you can use to control whether or not to strip the symbols from the standard libraries.

You can turn it off, but we recommend that in most cases you leave this enabled, because it will provide a significant reduction in the size of your app.

Next, Ben’s going to come up and talk about what’s new in strings, collections and generics.

[ Applause ]

Thanks, Bob.

So, we’ve got some really great features in the standard library and generics in this release.

And I’m going to start with strings.

Strings in Swift 4 make processing characters faster and easier, while still having the same goal they’ve always had of helping you write Unicode correct code.

So what do we mean by Unicode correct?

Well, a lot of it comes down to what we mean when we talk about the character.

In most programming languages a character is just a number, and some encoding.

In older systems, that might be ASCII.

These days, it’s probably one of the Unicode encodings.

So why does that matter?

Let’s look at an example.

So, the single letter é, with an acute accent, can be encoded in Unicode in two different ways.

One way is with a single Unicode scalar, U+00E9.

The other way is by following the plain letter E with the combining acute accent modifier.

These two ways of encoding the same letter are what Unicode calls canonically equivalent.

You ought to be able to use either one without it making any difference.

So, what can that mean in code.

Well, when a language’s default way of looking at strings is to look at the individual code units in the string, you can get some very odd behavior.

This example is in Ruby, but we see similar behavior in other languages like Java or C.

We can create two strings in two different ways that ought to be exactly equivalent.

To a user, they look identical.

But if we do things like count the number of characters, we get different results.

And if we use the default comparison operation, they’re not equal.

This can cause some really hard to understand and diagnose issues.

And that’s why Swift takes a slightly different approach.

In Swift, a character is what Unicode calls a grapheme.

A grapheme is what most users would think of as a single character when they see one on the screen.

And in Swift, no matter how you compose a particular grapheme, it’s one character, and two differently composed equivalent graphemes compare as equal.

Now, the logic for breaking up a string into graphemes can get quite complicated.

For example, the family emoji is made up by combining adult emoji with child emoji.

And in Swift 4, because we’re using the Unicode 9 grapheme breaking that’s built into the operating system, this counts as one character.
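Both behaviors are easy to check in a playground or script:

```swift
// Two canonically equivalent spellings of "é": one scalar vs.
// "e" plus a combining acute accent (U+0301).
let single = "caf\u{E9}"
let combined = "cafe\u{301}"

assert(single == combined)               // canonical equivalence
assert(single.count == combined.count)   // both are 4 characters

// The family emoji is several scalars joined with zero-width joiners,
// but grapheme breaking treats it as a single Character.
let family = "\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F466}"  // 👨‍👩‍👦
assert(family.count == 1)
```

The same strings that confuse code-unit-based languages compare equal and count identically here.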

But this complicated logic has a cost.

And in previous versions of Swift, you were paying for this cost for every character you processed.

Even simpler ones.

In this release, we’ve added fast paths for those simpler characters in many different languages.

That means that processing these simpler characters in a string as you go through it should take about a third of the time it did in previous versions.

These fast paths are robust to the presence of more complicated characters.

So, for example if you were processing messages on social media, that was mostly simple plain text, but with some emoji mixed in, we only take the slower more complex path to process the emoji.

Now, let’s look at that emoji example again.

There’s two things to notice about this.

One is that graphemes can be of variable width.

So, we clearly can’t have random access to a particular grapheme in a string.

We can have random access to a particular code unit, and you can still get at those in Swift strings.

But what does that mean? In this example, accessing the fifth code unit doesn’t mean anything; it’s certainly not the fifth character.

The other thing to notice is that there’s a bit of an unusual behavior.

And we’ve appended six items to a string, but when we were done, the count hadn’t gone up.

And that’s not normally what you’d expect from other collections like arrays.

And it was because of edge cases like this that it was felt that strings shouldn’t be collections in previous versions of Swift.

Instead, you used to have to access the characters as a collection through the characters property on the string.

But this actually really wasn’t helping anyone understand the issues it was trying to avoid.

All it was doing was cluttering up code.

It was dissuading people from thinking in terms of characters and from using the standard library to do their string processing.

So, in Swift 4, strings are a collection of characters.

And that helps clean up code like this, a lot.

[ Applause ]

Now, there’s one other thing we can simplify here.

In string processing it’s very common to want to slice from an index to the end of a string.

There’s a shorthand in Swift 4 for that.

You can leave off the end of a range whenever you’re slicing a collection and that means from the index, to the end of the collection.

And there’s a similar syntax for going from the start up to an index.

Making strings collections means they have all of the properties you’re use to in other collections, so you can zip them, map them, search or filter them.

This makes building up string processing a lot simpler.

We’ll look at an example.

Supposing you want to detect whether there was a country flag in a message in your app in order to trigger some logic.

Country flags in Unicode are made up of pairs of special regional indicators that spell out the ISO country code of the flag.

So, the Japanese flag for example is JNP.

We can add an extension to Unicode scaler to be able to detect whether it’s one of these special regional indicators.

Next, we can extend character in order to detect whether the character is a flag.

This uses a new property available in Swift 4 that lets you access the underlying Unicode scalars that make up the grapheme.

This is actually a really useful thing to play around with, if you want to learn more about how Unicode works, especially in a Swift playground.

Now that we have this, we can use it with all of the familiar collection APIs.

So, we can search if the string contains a flag.

Or, we can filter out just the flags into a new string.
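A sketch of those extensions; the helper names here are my own, not the talk’s exact code:

```swift
extension Unicode.Scalar {
    // Regional indicator symbols occupy U+1F1E6...U+1F1FF.
    var isRegionalIndicator: Bool {
        return value >= 0x1F1E6 && value <= 0x1F1FF
    }
}

extension Character {
    // A flag is exactly two regional indicator scalars, e.g. "JP".
    var isFlag: Bool {
        let scalars = unicodeScalars
        return scalars.count == 2
            && scalars.allSatisfy { $0.isRegionalIndicator }
    }
}

let message = "Arrived in 🇯🇵 today!"
let hasFlag = message.contains { $0.isFlag }      // search for a flag
let flags = String(message.filter { $0.isFlag })  // keep only the flags
```

Because String is a collection of Characters, contains(where:) and filter work on it directly.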

So, now that strings are collections, you might notice that they have a new method, split, which is an existing method on Collection.

It breaks up a string into an array of slices.

But if you run it in Swift 4, you’ll notice it doesn’t return an array of Strings.

The slice type for strings in Swift 4 is Substring.

So, why did we give it a different type?

Well, there’s some fundamental engineering trade-offs to be made when deciding how slicing on collections ought to work.

When you’re slicing a collection, should it make a copy of the elements that you’re slicing out, or should it return a view into the internal storage of the original collection.

From a performance point of view, sharing storage is clearly going to be faster.

As Bob mentioned earlier, allocating and tracking heap memory can be very expensive.

You could easily spend at least half of the time in an operation like split making copies.

What’s more, if slicing takes linear time, because we’re making copies of the elements, then a loop that was performing slicing operations might accidentally be quadratic, instead of running in the linear time you might be expecting.

So, that’s why slicing of any collection in Swift needs to happen in constant time.

But that shared storage approach that we use instead also has a downside.

And to understand what that is, let’s look at the internal implementation of string.

So, currently in Swift, strings are internally made up of three properties.

They have a pointer to the start of the buffer.

They have a count of the number of code units in the buffer, and they have a reference to an owner object.

The owner object is responsible for tracking and managing the buffer.

And this is a familiar pattern if you know how copy-on-write works in other collections.

When the original string struct is destroyed, the reference count of the owner object drops to zero, and the deinitializer on the class frees the buffer.

Now, let’s have a look at what happens when we create a substring.

So, supposing we sliced off just the word well from the original string.

The substring now has a start that points to the W, has a count of five, and the owner is a shared reference to the original string’s owner.

Now, what happens when that original string goes out of scope?

The owner’s reference count is decremented, but it’s not destroyed because it’s being shared by the substring.

So, the buffer isn’t freed.

And that’s good because the substring is relying on the buffer.

But the entire buffer remains, not just the part the substring is relying on.

Now in this case, it’s no big deal, it’s just a few characters.

But it can be a real problem.

Supposing you downloaded a giant blob of text from the internet, then you sliced out just a small part of that text and assigned it to some long-lived variable like a UI label.

This ends up looking like a memory leak.

Because the original giant blob of text’s buffer never gets freed.

This was actually such a big problem in Java that they changed the behavior of slicing on strings a few years ago to make it make copies.

But as we’ve seen, that has a performance downside, and we don’t necessarily want to make that tradeoff.

The natural solution for a problem like this in Swift is to use a type.

And that’s why substrings are a different type to strings.

Now, when you perform a slicing operation on the original large string, you’ll often end up wanting to assign the substring to a String, and the compiler will tell you about it.

If you apply the fix-it it suggests, you’ll create a new String, and that will copy just the part of the buffer that you sliced.

And that allows the original buffer to go out of scope and be freed up.
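In code, the pattern looks something like this (a sketch; the names are mine):

```swift
let response = "Well, hello there!"

// prefix(_:) is a slicing operation, so the result is a Substring
// that shares response's storage.
let slice = response.prefix(4)

// let stored: String = slice   // error: 'Substring' is not 'String'
let stored = String(slice)      // the fix-it: copy just the sliced characters
```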

[ Applause ]

So, now we’ve got two different types.

You might ask the question when should I use substring in my code.

And the answer is, you probably shouldn’t very often, at least not explicitly.

When defining interfaces, like methods on types or properties, you should prefer to use string, both to avoid the memory leak issues we just talked about, but also because string is what we call the common currency type.

It’s the type that everybody expects to see in APIs.

Most of the time, you will only encounter the Substring type when you’re performing a slicing operation.

And because Swift uses type inference, you won’t actually name the substring type at all.

Substrings have many of the same methods and properties as regular strings.

So even though you’re not naming the type as a substring, much of the code will operate just the same as if it was operating on a string.

And if you don’t actually need to create a string because you’re only doing local operations, then that can be avoided altogether.

So, that’s almost it for strings.

There’s one last feature we want to talk about.

And that’s multiline string literals.

Previously, these were a real pain to write.

You had to write one big long string literal with embedded \n escapes in it.

Swift 4 introduces the triple quoting syntax.

You start your multiline string with a triple quote.

[ Applause ]

And then you end it with a triple quote.

The indentation of the closing triple quote is what determines the indentation for every line of the multiline string.

You can see here, because we’ve put our literal inside a function, we want it to be nicely indented to match the formatting of the rest of our code.

The rule is, whatever indentation you use on the closing quote, you need to include at least that much indentation on every line of the string.

Then, when the code is compiled, that indentation is stripped off.
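For example, here is a sketch of a multiline literal inside a function, indented to match the surrounding code (the function itself is hypothetical):

```swift
func makeGreeting(name: String) -> String {
    let text = """
        Dear \(name),

        Welcome to WWDC.
        """
    // The closing quote's indentation is stripped from every line at
    // compile time, so the resulting string starts at column zero.
    return text
}
```

String interpolation works inside multiline literals exactly as it does in single-line ones.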

This is a really nice feature.

And one of the cool things to mention about it is that it was both proposed and implemented by external members of the Swift open source community.

[ Applause ]

So, that’s it for string.

Now, let’s talk about some of the new generics features.

With each version of Swift, we’ve been refining the generic system.

Both to make it more powerful, but also to make it more usable and approachable.

Such as with protocol extensions that came in Swift 2.

In this release, we are introducing two features.

Where clauses on associated types, and generic subscripts.

And I’m going to show you a couple of examples of how we’ve used them in the standard library to give you an idea of how you might be able to use them in your code.

So, supposing you wanted to detect whether every element of a sequence was equal to a particular value.

You can do this with a contains method that already exists on sequence.

But, that code is a bit clunky.

You have to write that the sequence doesn’t contain any element not equal to the value.

If you were writing this over and over again, that might get pretty annoying. Protocol extensions give you a really nice way to wrap up code like this into helper methods that neaten up your code.

So, we can wrap this code inside an extension on sequence that gives us something much more readable to call.

Now, when you’re extending sequence like this, there’s one thing that’s slightly annoying.

And that’s that you used to have to write Iterator.Element to refer to the type of the elements of the sequence.

In Swift 4, you can drop the Iterator.

because Sequence now has an Element type of its own.
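Put together, the Swift 4 version of the helper might look like this (the name `allEqual(to:)` is my own; the point is that `Element` is usable directly on Sequence):

```swift
extension Sequence where Element: Equatable {
    // True when the sequence doesn't contain any element not equal to value.
    func allEqual(to value: Element) -> Bool {
        return !contains(where: { $0 != value })
    }
}

let allFives = [5, 5, 5].allEqual(to: 5)   // true
let mixed = [5, 5, 6].allEqual(to: 5)      // false
```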

Now, this might seem like a really easy feature to have added.

But we actually couldn’t do it without the ability to constrain associated types.

And I’ll show you how.

So, in Swift 3, we had a protocol, Sequence, and it had an associated type, Iterator.

And the iterator had an associated type for the element.

In Swift 4, we added the associated type Element, and then we added a where clause to the Iterator associated type to require that its element is the same as Sequence’s Element.
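A simplified model of those declarations, renamed here to avoid clashing with the real standard library protocols:

```swift
protocol MiniIterator {
    associatedtype Element
    mutating func next() -> Element?
}

protocol MiniSequence {
    associatedtype Element
    // New in Swift 4: a where clause on an associated type ties the
    // iterator's element type to the sequence's, so the two can never
    // get out of sync.
    associatedtype Iterator: MiniIterator where Iterator.Element == Element
    func makeIterator() -> Iterator
}
```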

Otherwise they could get out of sync and that would cause difficulties.

We’ve used this in several places in the standard library.

So, for example, previously there was no guarantee that the elements of a SubSequence were the same type as the elements of the sequence itself.

No one would ever want to write a sequence where that wasn’t the case.

It would be impossible to use.

But still, it wasn’t guaranteed by the compiler, because we had no way of expressing it in the language.

Now, with where clauses on associated types we can make that guarantee.

So, what does this mean for your code?

Well, if you’ve ever extended Sequence or Collection yourself, you’ve probably found that you had to add all of these seemingly unnecessary where clauses to your extension in order to guarantee that it would compile, because the body was relying on things the protocol didn’t guarantee.

Now, because we’ve done the things you’ve seen in the previous slides, the compiler can guarantee that.

And so, you’ll get warnings telling you that you now have redundant constraints.

These are just warnings in both Swift 3 and Swift 4 mode, and all they’re telling you is that they’re unnecessary and you can neaten up your code, which you can do at your own pace.

Now, there’s one more thing to know about these new constraints that we’ve added.

And that’s that this is one of the few things that is not backward compatible in Swift 3.2 mode.

Because protocol conformances have to be consistent across the entire program.

So, if you’ve written your own custom collection types that happen to violate some of these constraints, you’ll have to resolve those issues before you can compile with the new compiler.

We think this is a pretty rare thing to happen.

It’s usually an oversight and it’s usually easily resolved.

But it’s something to be aware of if you have done this.

So, finally, let’s talk about generic subscripts.

Earlier, we saw an example of a one-sided range syntax.

So, how did we actually implement this internally within the standard library?

Well, first, there’s a new type, PartialRangeFrom.

It looks a lot like a regular range, but it only has a lower bound.

Next, there’s a protocol, RangeExpression, which we’ve used to unify all of the different kinds of range types.

It has a method that takes a collection and uses that to turn any range expression into a concrete range type that can be used for slicing.

For example, PartialRangeFrom uses the collection’s endIndex to fill in the missing upper bound.

Now that we have that protocol, we can extend String with a generic subscript that will take any kind of range expression and use it to slice out a substring.

But because strings are now collections, we’re actually able to put this feature directly on Collection.

And that includes any custom collections that you might have written, which get this feature, automatically via the protocol.

We were actually able to clean up a lot of code in the standard library this way because we could remove all of the duplicated slicing operations we had to hard code for each different range type we wanted to support and replace them with a single generic subscript.
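The effect is that a single generic subscript accepts every kind of range expression, on strings and on any other collection. A quick illustration (the variable names are mine):

```swift
let numbers = [10, 20, 30, 40, 50]

// The same subscript handles every RangeExpression conformer:
let tail = numbers[2...]    // PartialRangeFrom
let head = numbers[..<2]    // PartialRangeUpTo
let mid  = numbers[1...3]   // ClosedRange

// relative(to:) is what fills in the missing bound from the collection:
let resolved = (2...).relative(to: numbers)   // the Range 2..<5
```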

And we hope that you can find similar ways to use generics to clean up your code as well.

So, there are loads of new features I didn’t get a chance to cover today, like some new numeric protocols.

And some really cool enhancements to the dictionary type.

One of the things we’ve added is a new method on collections, swapAt, that allows you to swap two elements in the collection given two indices, instead of using the global swap function that takes two inout arguments.
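For example, a small sketch:

```swift
var values = ["a", "b", "c", "d"]

// Swift 3 style — two simultaneous inout accesses to `values`, which
// conflicts with the new exclusivity rule:
// swap(&values[0], &values[3])

// Swift 4: one method, two indices, a single mutating access.
values.swapAt(0, 3)
```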

And this was to support a new feature of exclusive access to memory that John’s going to talk to you more about now.

[ Applause ]

Thanks, Ben.

Exclusive access to memory is a new rule that we’re adding in Swift 4.

It’s really the first part of a much larger feature that we call ownership.

Ownership is all about making it easier to understand the performance of your program.

It’s going to make it easier for you to optimize your program by eliminating unnecessary copies and retains when you need to.

But it’s also going to enable us to make Swift faster by default in a number of cases.

And ultimately, it’s going to enable some really powerful new language features for creating safe and optimally efficient abstractions. But before we can do any of that, we have to make it easier to reason about memory.

And that means enforcing exclusive access to memory.

So, what do I mean by that?

Let’s walk through an example.

Often it happens that I’m iterating over a collection like this and I want to modify each element as I go.

This is a pretty common pattern.

So, I’m going to go ahead and extract that into a method.

Now that I’ve got this method, I’ve got a generic operation that I can use to modify any mutable collection, one element at a time.

This operation is iterating over a set of indices that it captures at the start of the iteration.

So, this only really works if nothing within this operation actually modifies the set of indices by adding or removing elements from the collection.

But I can pretty clearly see nothing in this method is actually modifying the collection.

Right? Well, okay I do call this closure that was passed in.

And a closure is arbitrary code.

But again, I look at this method, and I think to myself OK I’m only giving this closure access to a specific element of the collection, not to the entire collection.

So, I should know that nothing is allowed to modify the collection while this operation is underway.
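A sketch of that generic operation (the name `modifyEach` is my own reconstruction of the session’s example):

```swift
extension MutableCollection {
    // Visit each element in place, handing the closure an inout reference
    // to one specific element — not to the whole collection.
    mutating func modifyEach(_ body: (inout Element) -> Void) {
        for index in indices {
            body(&self[index])
        }
    }
}

var numbers = [1, 2, 3]
numbers.modifyEach { $0 *= 2 }
// numbers is now [2, 4, 6]
```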

Unfortunately, in Swift 3, that’s not really how the language works.

Let’s go back to that code that I had where I was calling my method.

What if instead of just multiplying the element by 2, I try to access the numbers variable.

There’s nothing stopping me from doing that.

I can just, at any point in this closure, remove something from or add something to the array while I’m iterating over it in another method.

As soon as I do this, one thing immediately stands out to me.

It’s a lot harder to reason about what’s going on with this numbers array.

It used to be the case before that I could just look at each individual function in my program and think about what it individually is doing to each variable that it has access to.

And that’s great.

That’s one of the best properties of what we call value semantics.

That you get this kind of isolation in each part of your program.

Everything composes together and you don’t have to reason about everything all at once.

But unfortunately, because we can do things like this, we get a sort of reference-semantics-like effect, where all of a sudden you have to reason about your entire program together in order to understand what’s going on.

Here, when I do this sort of thing, I’m going to run past the end of the array.

Now, it’s pretty easy for me to go ahead and try to fix that in my method: instead of iterating over the set of indices captured at the start, I reload the index each time through and compare it, so that if I remove something from the end, everything is still going to work.

But, oh, there has to be another way.

A better way.

Because I’ve made my code so much uglier, and a little bit slower.

And you know if that were good enough, that would be a reasonable tradeoff, right.

You know, losing a little bit of performance and making your code a little bit uglier is OK, if it leads to it being more correct.

But that’s not really good enough here.

Let’s go back to this closure again.

What if instead of removing one thing at the end of the loop, I actually just wipe the entire array out before I even access the element.

What is this even accessing at this point?

Where is this going?

I’m assigning to something that doesn’t really exist anymore because the array doesn’t have any elements in it.

That’s a really good question and in order to answer it we need to dig into the implementation of array a little bit.

Array is a copy on write value type, implemented with a reference counted buffer.

At the beginning of the loop, numbers is pointing exactly to that buffer.

For performance, Swift really wants to bind the element variable that’s passed into the closure directly to the memory within that buffer.

But that creates a problem because when we do this assignment, numbers is no longer pointing to that buffer, which means it’s no longer keeping it alive.

Swift has to be a safe language.

We don’t want to just leave this as a dangling reference.

So, something has to be keeping this buffer alive in order for this to not end up crashing the program.

How can that work?

Well the way it works is that Swift actually implicitly creates a reference to the buffer for the duration of the subscript operation.

And that makes things not crash.

But it creates this extra performance penalty, which we’re hoping the optimizer will clean up, every single time that we subscript into the array.

So, this kind of nonexclusive access to memory creates problems at multiple levels.

Cascading problems in your program.

It makes things harder to reason about.

It makes your code less general and harder to prove correct.

And it creates performance problems both in your part of the program and for Swift, when it’s trying to optimize these general data structures.

The solution is that we have to have exclusive access to memory.

What do I mean by exclusive?

Well it’s OK to have two different parts of the program that are reading from the same variable at the same time.

But, when I have something that’s writing to the variable, it’s very important that nothing else be accessing it at all.

The thing that’s writing to the variable should be exclusive.

And that’s it.

That’s the rule.

That’s the new rule that we’re adding in Swift 4.

So, how do we enforce it?

Well, in most cases, like in our original example, Swift is actually capable of enforcing this at compile time.

Here I’m calling a mutating method on numbers.

That initiates a write to it for the duration of the call.

When I come along later within that call and call another mutating method on it, I’ve got a conflicting write that violates the rule.

And Swift can just see that that’s happening at compile time and tell you about it immediately.
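A minimal illustration of the compile-time check (my own example, reduced from the session’s):

```swift
func modifyTwice(_ value: inout Int, by body: (inout Int) -> Void) {
    body(&value)
    body(&value)
}

var count = 0

// Fine: all access goes through the inout parameter.
modifyTwice(&count) { $0 += 1 }

// Rejected at compile time: the closure captures `count` while
// modifyTwice already holds exclusive write access to it.
// modifyTwice(&count) { _ in count += 1 }  // error: overlapping accesses to 'count'
```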

Now, that’s generally going to be true in most common value semantics sorts of situations.

But there are some situations where that’s not possible, generally because of some sort of reference semantics.

Either a global variable, or some sort of shared memory like a class property.

So, let’s go back to our original example.

Here, numbers was a local variable.

But what if instead it were a class property?

Well, the situation is still basically the same.

Here, I’m calling a mutating method on a class property.

And within the closure, I’m calling a mutating method on the same class property.

But they are on objects, and the compiler can’t reason about whether they’re the same object or not.

In general, the point of class types is that you can move them, copy them around, share them throughout your program.

And use them wherever you like.

But, that means that the compiler can no longer tell you conclusively whether or not any particular function call or access like this is actually accessing the same object.

So, the compiler has to be conservative.

Now, it would be prohibitive if we just banned all this sort of thing all the time.

So, instead we do the check dynamically.

Which means that we’ll get an error like this at run time.

But only if they’re actually the same object.

Of course, if they’re different objects, the two class properties on them are considered different memory.
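In code, the dynamic case looks something like this (a hypothetical sketch):

```swift
class Box {
    var items = [1, 2, 3]
}

func combine(into target: Box, from source: Box) {
    // A write access to target.items overlaps with a read of source.items.
    // The compiler can't prove target and source are different objects,
    // so the exclusivity check here happens at run time.
    target.items.append(contentsOf: source.items)
}

let a = Box()
let b = Box()
combine(into: a, from: b)   // fine: different objects, different memory
// combine(into: a, from: a) // traps at run time: overlapping access to 'items'
```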

And there’s no conflict.

Now, this enforcement is, for performance reasons, only done within a single thread.

However, the thread sanitizer tool that we make available in Xcode will catch this sort of problem even across threads.

There’s a great session later this week about finding bugs like this with Xcode, and I strongly encourage you to go to it.

This is a Swift 4 rule.

Like we said yesterday in the State of the Union, Swift 3.2 is all about allowing your existing code to continue to work.

So, in Swift 3.2, this is just a warning.

However, because Swift 4 and Swift 3 need to interoperate in a future version of Xcode we are going to have to make this an error even in Swift 3 mode.

So, we strongly encourage you to pay attention to these warnings and fix them.

Because they’re just warnings, you can fix them in your own time and at your own pace, but you should take them seriously.

We’re really looking forward to the power this is going to bring.

It’s going to make it so much easier to reason about code.

It’s going to enable a lot of really amazing optimizations, both in the library and in the compiler.

And it’s also going to make it a lot easier for us to deliver tools that you can take advantage of to optimize your own code.

If you’re interested in reading more about what we’re planning on doing with this, there’s an ownership manifesto on the Swift website that I would encourage you to check out.

Now, one caveat.

In the developer preview that we’ve given you this week, some of this stuff is still in progress.

There’s a lot of information in the release notes.

I really encourage you to check those out.

We really would like you to go ahead and make sure that all of this stuff is enabled.

And let us know if you run into any problems.

And that’s it for what’s new in Swift.

There are a lot of great new refinements to the library and to the language.

We’ve got a great new String type, and we’ve optimized a lot of stuff.

And we’ve really done a ton of work on the tools and improving the performance and the code size of your code.

I hope you have a great WWDC, and thank you very much for coming.

[ Applause ]
