Swift Generics

Session 406 WWDC 2018

Generics are one of the most powerful features of Swift, enabling you to write flexible, reusable components while maintaining static type information. Learn about the design of Swift's generics, including how to generalize protocols, leverage protocol inheritance to express the varying capabilities of related types, build composable generic components with conditional conformances, and reason about the interaction between class inheritance and generics.

[ Music ]

[ Applause ]

Hi, everybody.

I'm Ben.

I work on the Swift standard library.

And, together with my colleague Doug, from the compiler team, we're going to talk to you about Swift generics.

So, the recent releases of Swift have added some important new features, including conditional conformance, and recursive protocol constraints.

And, in fact, with every release of Swift, we've been refining the generic system, making it more expressive.

And, we feel that the 4.2 release marks an important point.

It's the point where we can finally fully implement a number of designs that have always been envisioned for the standard library.

Something that's critical for us in achieving our goal of API stability for Swift.

So, we've given a lot of talks about generics in the past, but we haven't taken a step back, and talked about generics as a whole for a while.

So, today, we're going to take you through a few different features of the generics system, both new and old, to help understand how they fit together.

I'm going to briefly recap the motivation for generics.

We're going to talk about designing protocols, giving a number of concrete types, using examples taken from the standard library.

We're going to review protocol inheritance, and talk about the new feature of conditional conformance, and how it interacts with protocol inheritance.

And finally, we're going to wrap up with a discussion of classes and generics.

So, why are generics such an important part of Swift?

Well, one way of seeing the impact is by designing a simple collection-like type.

We'll call it buffer, and it's going to be similar to the standard library's array type.

Now, the simplest possible API for the reading part of a buffer might include a count of the number of elements, and a way to fetch each element at a given position, the index.

But, what do we make that return type?

Now, if we didn't have generics, we'd have to make it some kind of type that could represent anything that we'd want to put inside the buffer.

You could call that type id, or Object, or void star.

In Swift we call it Any, which is a type that can stand in for any different kind of type in Swift.

So, if you wanted to handle anything in the buffer, you could have subscript return an Any.
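As a rough sketch, not the actual slide code, here's what that untyped API might look like; the storage array inside is just an assumption:

struct UntypedBuffer {
    private var storage: [Any]
    init(_ elements: [Any]) { self.storage = elements }

    var count: Int { return storage.count }

    // The subscript can only promise to hand back an Any.
    subscript(index: Int) -> Any {
        return storage[index]
    }
}

let words = UntypedBuffer(["one", "two"])
// The value has to be cast back out of the box before it can be used,
// and a mistake only shows up at runtime.
let first = words[0] as? String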

But, of course, you probably know that that leads to a really unpleasant user experience.

At some point, you've got to get out that type from inside the box, in order to actually use it.

And, this isn't just annoying, it's also error-prone.

What if somewhere in your code, maybe by accident, you put an integer into what was supposed to be a buffer of strings?

But, it's not just about ease of use, we also want to solve some problems relating to how these values are represented in memory.

Now, the ideal representation for a buffer of strings would be a contiguous block of memory, with every element held inline, next to each other.

But, with an untyped approach, this doesn't work out quite so well, because the buffer doesn't know in advance what kind of type it's going to contain.

And so, it has to use a type like Any, that can account for any of the possibilities.

And, there's a lot of overhead in tracking, boxing, and unboxing the types in that Any.

Here, I might have just wanted a buffer of integers, but I have no way of expressing that to the compiler.

And so, I'm paying for flexibility, even though I'm not interested in it.

What's more, because Any has to account for any different kind of type, including types that are too large to fit inside its own internal storage, it has to sometimes use indirection.

It has to hold pointers to the values, and those values could be located all over memory.

And so, we really want to solve these problems, not just for ease of use and correctness, but also for performance reasons.

And, we do it using a technique called parametric polymorphism, which is just another term for what we in Swift refer to as generics.

With a generic approach, we put more information on the buffer, to represent the type that the buffer is going to contain.

We'll call that type Element.

Element is a generic parameter of the type, hence the term parametric polymorphism.

You can think of it kind of like a compile-time argument, that tells the buffer what it's going to contain.

Now it has a way of referring to that element type.

It can use it wherever it was previously using Any.

And, that means that there's no need to do conversions when you're getting a type out of the buffer.

And, if you make an accidental assignment of the wrong kind of type, or some issue similar to that, the compiler will catch you.
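A minimal sketch of the generic version, again with an array as the assumed storage:

struct Buffer<Element> {
    private var storage: [Element]
    init(_ elements: [Element]) { self.storage = elements }

    var count: Int { return storage.count }

    // No boxing and no casting: the subscript hands back the real element type.
    subscript(index: Int) -> Element {
        return storage[index]
    }
}

let words = Buffer(["one", "two"])
let first = words[0]            // inferred as String, no cast needed
// let number: Int = words[0]   // compile-time error: String is not an Int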

Now, there's no such type as buffer without an associated element type.

If you try to declare a type like that, you'll get a compilation error.

You might find that slightly surprising, because sometimes you'll see that you can declare types like buffer without any element type.

But, that's just because the compiler is able to infer what the element type ought to be from the context.

In this case, from the literals on the right-hand side here.

The element is still there, it's just implicit.

This knowledge of exactly what type a type like buffer contains is carried all the way through both compile and runtime.

And, this means that we can achieve our goal of holding all of the elements in a contiguous block of memory, with no overhead, even if those types are arbitrarily large.

And, because the compiler has direct knowledge at all times of exactly what element type the buffer contains, it has optimization opportunities available to it that it wouldn't otherwise have.

So, in the case here, where I've declared a buffer of integers, a loop like this ought to be compiled down to just a handful of very efficient CPU instructions.
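For example, assuming the Buffer sketch above, the summing loop might look like this:

let numbers = Buffer([1, 2, 3, 4, 5])   // Buffer<Int>, inferred from the literals

var total = 0
for i in 0..<numbers.count {
    total += numbers[i]
}
// total == 15; with Element known to be Int, this can compile down to tight integer code.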

Now, if you were writing a loop like this, on a regular basis, to sum up a buffer of integers, it might make sense to extract it out into a method.

An extension on buffer that's more unit-testable, and more readable when you actually call it.

But, if you've written code like this, you probably know that you'll get a compilation error, because not all element types can be summed up like this.

We need to tell the compiler more about the capabilities the element needs to have, in order to make this method available on a buffer.

Now, the easiest way to do that is by constraining the element type to be a specific type like the int from our original loop.

If you take this easy approach to get up and running with your extension, it's easy to generalize it later, when you find you need to do something different, like sum up a buffer of doubles, or floats.

Just look at the type that you've constrained to.

Look at the protocols it conforms to, and follow them up until you get the most general protocol that gives you everything that you need to do your work.

In this case, the numeric protocol, which gives us the two things we're relying on here, the ability to create a new element with a value of 0, and the ability to add elements to it, which come as part of the numeric protocol.
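A sketch of that generalized extension, continuing the hypothetical Buffer type from above:

extension Buffer where Element: Numeric {
    func sum() -> Element {
        // Numeric gives us both pieces we rely on:
        // an initial value of 0, and the += operator.
        var total: Element = 0
        for i in 0..<count {
            total += self[i]
        }
        return total
    }
}

let sumOfDoubles = Buffer([1.5, 2.5, 3.0]).sum()   // 7.0, with Element inferred as Double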

Now, let's talk about that process of factoring out protocols from various types.

So, we've been talking about this buffer type, and we can make it generic across different elements, but what about writing generic code that's generic in a different direction?

Of writing code that works on any different kind of collection?

Such as an array that's very similar to our buffer type, but also more varied types, like a dictionary that's a collection of key value pairs, or maybe types that aren't generic over their element types, like data or string, which have specific element types.

We want to create a protocol that captures all of their common capabilities.

We're going to create a cut down, simplified version of the standard library's own collection protocol.

So, notice that we considered a varied number of concrete types first.

And now, we're thinking about a kind of protocol that could join them all together.

And, it's important to think of things this way around.

To start with some concrete types, and then try and unify them with a protocol.

What do those types have in common?

What don't they have in common?

When you're designing a protocol like this, you can think of it kind of like a contract negotiation.

There's a natural push and pull here, between conforming types on the one hand, that want as much flexibility as possible in fulfilling that contract, and users of the protocol, that want a really nice, tight, simple protocol in order to do their extensions.

That's why it's really important to have both a variety of different possible conforming types, and a number of different use cases in mind when you're designing your protocol.

Because it's a balancing act.

So, let's start to flesh out the collection protocol.

So, first we need to represent the element type.

Now, in protocols, we use an associated type for that.

Each conforming type needs to set element to be something appropriate.

In the case of buffer, or array, as of Swift 4.2, this happens automatically.

Because we also named their generic parameters to be element as well.

This is a nice side benefit of giving your generic arguments meaningful names that follow common conventions like the word element, rather than giving them something arbitrary like T that you'd have to separately state was the element type.

For other data types, you might need to do something slightly more specific, for example, a dictionary needs to set the element type to be the pair of its key and value type.

Next, let's talk about adding the subscript operation.

Now, if we were talking about just a protocol for types like array, we might be tempted to have subscripts take an int as its argument.

But, making subscript take an int would imply a very strong contract.

Every conforming type would have to supply the ability to fetch an element at a given position represented by an integer.

And, that works great for types like array.

It's also definitely easy for users of protocol to understand.

But, is it flexible enough for a slightly more complicated type, like a dictionary?

Now, no matter how you model it, a dictionary's probably going to be backed by some fairly complicated internal data structure that has specific logic for moving from one element to the next.

For example, it could be backed by an internal buffer of some kind, and it could use an index type that stored an offset into that buffer, that it could then take as the argument to subscript in order to fetch the element at that position, using that offset.

But, it would be critical that the dictionary's index type be an opaque type that only the dictionary can control.

You wouldn't want somebody necessarily just adding 1 to your offset.

That wouldn't necessarily move to the next element in the dictionary.

It could move to some arbitrary, maybe uninitialized, part of the dictionary's internal storage.

So, instead we want the dictionary to control moving forward through the collection by advancing the index.

And so, to do that we add another method that given an index, gives you the index that marks the position after it.

Once you take this step, you need a couple more things.

You need a start index property, and an end index property.

Because a simple count isn't going to work anymore in order to tell us that we've reached the end.

Now that we're not using ints as our index type.

So, let's bring those back to the collection protocol.

So, we've got a subscript that takes some index type to represent a position, and gives you an element there.

And, we've got a way of moving that position forward.

But, we also need types to supply what kind of type they're going to use for their index.

We do that with another associated type.

Conforming types would supply the appropriate types, so an array or a data would give an int as their index type, whereas a dictionary would give its own custom implementation that handles its own internal logic.
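Pulling those pieces together, the cut-down protocol might read something like this (it borrows the name Collection purely to mirror the talk, not the standard library's full definition):

protocol Collection {
    associatedtype Element
    associatedtype Index

    var startIndex: Index { get }
    var endIndex: Index { get }

    subscript(position: Index) -> Element { get }
    func index(after i: Index) -> Index
}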

So, let's go back to count that we dropped a minute ago in order to generalize our indexing model.

It's still a really useful property to have.

So, we probably want to add it back as an extension on collection.

Something that walks over the collection, moving the index forward, incrementing a counter that it then returns.

Now, if we try and implement this, we hit another missing requirement.

Since we moved off of int to a general index type, we can no longer assume that the index type is equatable.

Ints are, but arbitrary index types aren't necessarily.

And, we need that in order to know that we've reached the end.

Now, we could solve this in the same way that we did earlier, of constraining our extension, saying that it only works when the index type is equatable.

But, that doesn't feel right.

We want a protocol to be easy to use, and it's going to get really irritating if we have to always, on every extension we write, put this constraint on there.

Because we're nearly always going to need to be able to compare two indexes.

Instead, it's probably better expressed as a requirement of the protocol, as a constraint on our index associated type.

Putting this constraint on the protocol means that all types that conform to the protocol need to supply an equatable type for their index.

That way you don't have to specify it every time you write the extension.
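So the sketch becomes, with the constraint on the associated type and the count extension that relies on it:

protocol Collection {
    associatedtype Element
    associatedtype Index: Equatable   // every conforming type must supply an equatable index

    var startIndex: Index { get }
    var endIndex: Index { get }

    subscript(position: Index) -> Element { get }
    func index(after i: Index) -> Index
}

extension Collection {
    // Works for any conforming type: walk forward until we reach the end.
    var count: Int {
        var result = 0
        var position = startIndex
        while position != endIndex {      // relies on Index being Equatable
            result += 1
            position = index(after: position)
        }
        return result
    }
}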

This is another example of negotiating the protocol contract.

Users of the protocol had a requirement that they really needed to be able to compare indexes.

And, conforming types, they did a check that they can reasonably accommodate that without giving up too much flexibility.

In this case, they definitely can.

Ints, which data and array are using, are already equatable.

And, with Swift 4.2's new automatic synthesis of equatable conformance, it's easy for dictionary to make its index type equatable as well.

Next, let's talk about optimizing this count operation with a customization point.

So, we've written a version of count, that calculates the number of elements in the collection by walking over the entire collection.

But, obviously a lot of collections can probably do that a lot faster.

For example, supposing a dictionary kept internally a count of the number of elements it held, for its own purposes.

If it has this information, it can just serve it up in its own implementation of count.

That means that when people call count on a dictionary, they're getting fast constant time, instead of the linear time that our original version that works with any collection takes.

But, when adding optimizations like this, there's something you need to be aware of, which is the difference between fulfilling protocol requirements, and just adding lots of overloads onto specific types.

Up until now, this new version of count on dictionary is just an overload.

That means that when you have a dictionary, and you know it's a dictionary, you'll get the newer, better version of count.

But, what about calling it inside a generic algorithm?

So, supposing we wanted, for example, to write a version of the standard library's map?

If you're not already familiar with it, it's a really useful operation that transforms each element in the collection, and gives it back to you as a new array.

The implementation's pretty simple.

It just creates a new array, moves over the collection, transforms each element, and then appends it to the array.

Now, as you append elements to an array like this, the array automatically grows.

And, as it grows it needs sometimes to reallocate its internal storage in order to make more room to accommodate the new elements.

In a loop like this, it might have to do that multiple times over, depending on how big it gets.

And, doing that takes time.

Allocating memory can be fairly expensive.

There's a nice optimization trick we can do with this implementation.

We already know exactly how big the final array is going to be.

It's going to be exactly the same size as our original collection.

So, we could reserve exactly the right amount of space in the array up front, before we start appending to it, which is a nice speed-up.

And, to do this, we're calling count.
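A sketch of that map, written against the cut-down Collection protocol from earlier:

extension Collection {
    func map<T>(_ transform: (Element) -> T) -> [T] {
        var result: [T] = []
        result.reserveCapacity(count)    // reserve the final size up front, one allocation

        var position = startIndex
        while position != endIndex {
            result.append(transform(self[position]))
            position = index(after: position)
        }
        return result
    }
}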

But, we're calling count here, in what's referred to as a generic context.

That is, a context where the collection type is completely generic, not specific.

It could be an array, or a dictionary, or a linked list, or anything.

So, we can't know that it necessarily has a better implementation of count available to it, when the compiler compiles this code.

And so, in this case, the version of count that's going to be called is actually the general version of count, that works on any collection and iterates over the entire collection.

If you called map on a dictionary, it wouldn't yet call the better version of count that we've just written.

In order for a customized method or property like this to be called in a generic context, it needs to be declared as a requirement on the protocol itself.

We've established that there's definitely a way in which certain collections could provide an optimized version of count, so it makes sense to add it as a requirement on the protocol.

Now, even though we've made it a requirement, not all collections have to provide their own implementation, because we've already provided one via our extension that will work on any collection.

Adding a requirement to the protocol, and alongside it adding a default implementation via an extension is what we refer to as a customization point.
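In the cut-down sketch, that looks something like this: the requirement on the protocol, plus the default implementation that any conforming type gets for free:

protocol Collection {
    associatedtype Element
    associatedtype Index: Equatable

    var startIndex: Index { get }
    var endIndex: Index { get }
    var count: Int { get }               // now a requirement: a customization point

    subscript(position: Index) -> Element { get }
    func index(after i: Index) -> Index
}

extension Collection {
    // Default implementation, used by any conforming type that doesn't customize.
    var count: Int {
        var result = 0
        var position = startIndex
        while position != endIndex {
            result += 1
            position = index(after: position)
        }
        return result
    }
}

// A type that already tracks its size can now satisfy the requirement in constant
// time, and generic code will dynamically dispatch to that faster version.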

With a customization point, the compiler can know that there's potentially a better implementation of a method or property available to it, and so, in a generic context, it dynamically dispatches to that implementation through the protocol.

So now, if you call map on a dictionary, even though it's a completely generic function, you will get the better implementation of count.

Adding customization points like this, alongside default implementations through extensions is a really powerful way of getting the same kind of benefit that you can also get with classes, implementation inheritance, and method overriding.

But, this technique works on structs and enums, as well as classes.

Now, not every method can be optimized like this.

And, customization points have a small but non-zero impact on your binary size and your compile-time and runtime performance.

So, it only makes sense to add customization points when there's definitely an opportunity for customization.

For example, in the map operation that we just wrote, there's no reasonable way in which any different kind of collection could actually provide a better implementation.

And so, it doesn't make sense to add it as a customization point.

It can just stay as an extension.

So, we've created this collection type, and it's actually pretty fully-featured now.

It has lots of different conforming types possible, and various different useful algorithms you can write for it.

But, sometimes you need more than just a single protocol in order to categorize your family of types.

You need protocol inheritance.

And, to talk to you more about that, here's Doug.

[ Applause ]

Thank you, Ben.

So, protocol inheritance has been around since the beginning of Swift.

And, to think about where we need protocol inheritance, let's go look at this collection protocol that we've been building.

It's a nice protocol.

It's well-designed.

It describes a set of conforming types, and gives you the ability to write interesting generic algorithms on them.

But, we don't have to reach very far to find other collection-like algorithms that we cannot implement in terms of the collection protocol thus far.

For example, if we want to find the index of the last element in a collection, that matches some predicate, the best way to do that would be to start at the end, and walk backwards.

Collection protocol doesn't let us do that.

Or, say we want to build a shuffle operation to randomly shuffle around the elements in a collection.

Well, that requires mutation, and collection doesn't do that.

Now it's not that the collection protocol is wrong, but it's that we need something more to describe these additional generic algorithms, and that is the point of protocol inheritance.

So, here the bidirectionalCollection protocol inherits from, or is a collection.

What that means is that any type that conforms to the bidirectionalCollection protocol also conforms to collection, and you can use those collection algorithms.

But, bidirectionalCollection adds this additional requirement, of being able to step backwards in the collection.

An important thing to note is not every collection can actually implement this particular requirement.

Think of a singlyLinkedList, where you only have these pointers hopping from one location to the next.

There's no efficient way to walk backward through this sequence, so it cannot be a bidirectionalCollection.

So, once we've introduced inheritance, you've restricted the set of conforming types, but you've allowed yourself to implement more interesting algorithms.

So, here's the code behind this lastIndex(where:) operation.

It's fairly simple.

We're just walking backwards through the collection, using this new requirement from the bidirectionalCollection protocol.
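The standard library provides lastIndex(where:) in Swift 4.2; a sketch of roughly how it can be written against BidirectionalCollection looks like this:

extension BidirectionalCollection {
    func lastIndex(where predicate: (Element) throws -> Bool) rethrows -> Index? {
        var position = endIndex
        while position != startIndex {
            // Step backwards first, since endIndex is one past the last element.
            position = index(before: position)
            if try predicate(self[position]) {
                return position
            }
        }
        return nil
    }
}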

Let's look at a more interesting algorithm.

So, here's a shuffle operation.

So, it was introduced for collections in Swift 4.2.

You don't have to implement it yourself, but we're going to look at the algorithm itself to see what kinds of requirements it introduces, to figure out how to categorize those into protocols meaningfully.

So, the Fisher-Yates shuffle algorithm's a pretty old algorithm.

It's also fairly simple.

You start with an index to the first element in the collection.

And then, you select randomly some other element in the collection, and swap those two.

In the next iteration, you move the left index forward one, randomly select between there and the end, swap those elements.

And so, the algorithm is pretty simple.

It's just this linear march through the collection, randomly selecting another element to swap with.

And, at the end of this, you end up with a nicely shuffled collection.

So, we can actually look at the code here.

It's a little bit involved.

Don't worry about that.

And, we're going to implement it on some kind of collection.

So, we'll look at the core operations in here.

So, first we need to be able to grab a random number between where we are in the collection and the end of the collection, using this random facility.

But, that's an integer.

And, what we need is an index into the collection.

We know those are different.

So, we need some operation, let's call it index(_:offsetBy:), to jump from the start index quickly over to whatever position we've selected.

The other operation we need is the ability to swap two elements.

Great. We have two operations that we need to add to the notion of a collection to be able to implement shuffle, therefore, we have a new shuffleCollection protocol.

Please don't do this.

So, this is an anti-pattern that we see.

And, the anti-pattern here is we had one algorithm.

We found its requirements, and then we packaged it up into a protocol that is just that one just describes that one algorithm.

If you do this, you have lots and lots and lots of protocols around that don't have any interesting meaning.

You're not learning anything from those protocols.

So, what you should do is notice that we actually have distinct capabilities here.

So, shuffle is using random access, and it's using mutation.

But, these are separate, and we can categorize them in separate protocols.

So, for example, the randomAccessCollection protocol is something where it allows us to jump around the collection, moving indices quickly.

And, there are types like unsafeBufferPointer that can give you random access.

But, do not allow any mutation.

That's a separate capability.

So, we also have the mutableCollection protocol here.

And, we can think of types here that allow mutation, but not random access, like the singlyLinkedList that we talked about earlier.

Now, you notice that we've essentially split the inheritance hierarchy here.

We've got the access side for random access, bidirectional, and so on.

And then, we've got this mutation side.

That's perfectly fine, because clients themselves can compose multiple protocols to implement whatever generic algorithm they're doing.

So, we go back to our shuffle algorithm.

And, it can be written as an extension on randomAccessCollection, with a constraint on the Self type.

That is, the type that conforms to randomAccessCollection must also conform to the mutableCollection protocol.

And now, we've pulled together the capabilities of both of these.
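A hand-rolled sketch of that composition (Swift 4.2's standard library already provides shuffle(), so the name here is just illustrative):

extension RandomAccessCollection where Self: MutableCollection {
    mutating func fisherYatesShuffle() {
        var position = startIndex
        var remaining = count
        while remaining > 1 {
            // Jump to a randomly chosen element between here and the end...
            let other = index(position, offsetBy: Int.random(in: 0..<remaining))
            // ...and swap it with the current one, then march forward.
            swapAt(position, other)
            position = index(after: position)
            remaining -= 1
        }
    }
}

var numbers = Array(1...10)
numbers.fisherYatesShuffle()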

Now, when you have a bunch of conforming types, and a bunch of generic algorithms, you tend to get protocol hierarchies forming.

Now, these hierarchies, they shouldn't be too big.

They should not be too fine-grained.

Because you really want a small number of protocols that really describe the kinds of types that show up in the domain, right?

And now, there's things that you notice when you do build these protocol hierarchies.

So, as you go from the bottom of the hierarchy to the top, you're going to protocols that have fewer requirements, and therefore, there are more conforming types that can implement those requirements.

Now, on the other hand, as you're moving down the hierarchy, and combining different protocols from the hierarchy, you get to implement more intricate, more specialized algorithms that require more advanced capabilities, but naturally work with fewer conforming types.

OK.

So, let's talk about conditional conformance.

This is, of course, a newer feature in Swift.

And, let's start by looking at slices again.

So, for any collection that you have, you can form a slice of that collection by subscripting with a particular range of indices.

And, that slice is essentially a view into some part of the collection.

Now, the default type that you get from slicing a collection is called slice.

And, slice is a generic adaptor type.

So, it is parameterized on a base collection type, and it is itself a collection.

So, our expectation on a slice is that you can do anything to a slice that you can do to the underlying collection.

It's a reasonable thing to want.

And so, certainly we can go and use the forward search operations like index(where:) to go find something matching a predicate, and that works on the collection and any slice of that collection.

So, we'd like to do the same thing with backwards search, but here we're going to run into a problem.

So, even if the buffer is a bidirectionalCollection, nothing has said that the slice is a bidirectionalCollection.

We can fix that.

Let's extend slice to make it conform to the bidirectionalCollection protocol.

We need to implement this index(before:) operation, which we can implement in terms of the underlying base collection.

Except the compiler's going to complain here.

The only thing we knew about that base collection is that it's a collection.

It doesn't have an index(before:) operation on it.

We know how to fix this.

All we need to do is introduce a requirement into this extension to say that well, base needs to be a bidirectionalCollection.

This is conditional conformance.

All it is, is extensions that declare conformance to a protocol, and then the constraints under which that conformance actually makes sense.

And, the wonderful thing about conditional conformance, is it stacks nicely when you have these protocol hierarchies, so we can also state that slice is a randomAccessCollection, when its underlying base type is a randomAccessCollection.
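A sketch of that stacking, using a simplified slice-like wrapper over the real standard library protocols (the standard library declares the equivalent conformances for its own Slice type):

struct MySlice<Base: Collection>: Collection {
    var base: Base
    var startIndex: Base.Index
    var endIndex: Base.Index

    subscript(position: Base.Index) -> Base.Element { return base[position] }
    func index(after i: Base.Index) -> Base.Index { return base.index(after: i) }
}

// Bidirectional only when the underlying collection is bidirectional.
extension MySlice: BidirectionalCollection where Base: BidirectionalCollection {
    func index(before i: Base.Index) -> Base.Index { return base.index(before: i) }
}

// Random access only when the underlying collection has random access;
// forwarding these keeps the O(1) guarantees of the base collection.
extension MySlice: RandomAccessCollection where Base: RandomAccessCollection {
    func index(_ i: Base.Index, offsetBy distance: Int) -> Base.Index {
        return base.index(i, offsetBy: distance)
    }
    func distance(from start: Base.Index, to end: Base.Index) -> Int {
        return base.distance(from: start, to: end)
    }
}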

Now, notice that I've written two different extensions here.

Now, it's generally good Swift style to write one extension per protocol conformance, so you know what that extension is for, and you know its meaning.

It's particularly important with conditional conformances, because you have different requirements on these extensions.

And, this allows for composability.

Whatever the underlying base collection can do, the slice type can also do.

So, let's look at another application of conditional conformance, also in the standard library, and these are ranges.

So, ranges have been around forever in Swift.

And, you can form a range with, for example, these ..< operators.

And so, you can form ranges of doubles, you can form ranges of integers.

But, some ranges are more powerful than others.

So, you can iterate over the elements in a range of integers.

Well, why can you do that?

It was because an intRange conforms to collection.

Now, if you actually look at the type that's produced by that ..< operator, it is aptly named the range type.

Again, it's generic over the underlying bound type.

So, in this case, we have a range of doubles, and it merely stores the upper and lower bounds.

That's fairly simple.

But, prior to Swift 4.2, you would get from an integer range, an actually different type.

This is the countableRange type.

Now, notice it's structurally the same as the range type.

It has one type parameter.

It has lower and upperBound, but it adds a couple additional requirements onto that bound type.

That the bound be stridable, right?

Meaning you can walk through and enumerate all the elements.

Now, that's the ability you need so that you can make countableRange conform to randomAccessCollection.

That enables the .forEach iteration loop, and other things.

But, with conditional conformance, of course, we can do better.

So, let's turn the basic range type into a collection when the bound type has these extra stridable requirements on it.

It's a simple application of conditional conformance, but it makes the range type more powerful when used with better type parameters.

Now, notice that I'm just conforming to randomAccessCollection.

I have not actually mentioned collection or bidirectionalCollection.

With unconditional conformances, this is OK.

Declaring conformance to randomAccessCollection implies conformances to any protocols that it inherits.

In this case, bidirectionalCollection and collection.

However, with conditional conformance, this is actually an error.

Now, if you think back to the slice example, we needed to have different constraints for those different levels of the hierarchy for collection versus bidirectionalCollection versus randomAccessCollection.

And so, the compiler's enforcing that you've thought about this, and made sure that you have the right set of constraints for conditional conformance.

In this case, the constraints across the entire hierarchy are the same.

So, we can just write out explicitly collection and bidirectionalCollection to assert that this is where all these conformances are, or we can do the stylistically better thing, and split out the different conformances.
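A sketch of the split-out style, using a simplified range type so as not to collide with the standard library's own Range (which declares the equivalent conformances):

struct MyRange<Bound: Comparable> {
    var lowerBound: Bound
    var upperBound: Bound
}

extension MyRange: Collection where Bound: Strideable, Bound.Stride: SignedInteger {
    var startIndex: Bound { return lowerBound }
    var endIndex: Bound { return upperBound }
    subscript(position: Bound) -> Bound { return position }
    func index(after i: Bound) -> Bound { return i.advanced(by: 1) }
}

extension MyRange: BidirectionalCollection where Bound: Strideable, Bound.Stride: SignedInteger {
    func index(before i: Bound) -> Bound { return i.advanced(by: -1) }
}

extension MyRange: RandomAccessCollection where Bound: Strideable, Bound.Stride: SignedInteger {
    func index(_ i: Bound, offsetBy distance: Int) -> Bound {
        return i.advanced(by: Bound.Stride(distance))
    }
    func distance(from start: Bound, to end: Bound) -> Int {
        return Int(start.distance(to: end))
    }
}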

Now, at this point, our range type is pretty powerful.

It does everything the countableRange does.

So, what should we do with countableRange?

We could throw it away.

In this case we're talking about the standard library, and there's a lot of code that actually uses countableRange, so we can keep it around as a generic type alias.

This is a really nice solution.

So, the generic type alias adds all of those extra requirements you need to make the range countable.

The requirements you need to turn it into a collection, but it's just an alternate name for the underlying range type.

Again, this is great for source compatibility, because code can still use countableRange.

On the other hand, it's also really nice to give a name to those ranges that have additional capabilities of being a randomAccessCollection.

In fact, we can use this to clean up other code.

To say, well, we know what a countableRange is.

It's a range with this extra striding capability, so we can go extend countableRanges, and that is a case in which we have randomAccessCollection conformance.
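Roughly how that's spelled in Swift 4.2 (the middleElement helper below is just a made-up example of code that leans on the conformance):

typealias CountableRange<Bound: Strideable> = Range<Bound>
    where Bound.Stride: SignedInteger

extension CountableRange {
    // A hypothetical helper that relies on the RandomAccessCollection conformance.
    var middleElement: Bound? {
        return isEmpty ? nil : self[index(startIndex, offsetBy: count / 2)]
    }
}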

So, we've introduced this in Swift 4.2 to help simplify the set of types that we're dealing with, and make the existing core types like range more composable and more flexible.

OK.

One last topic.

So, Swift is a multi-paradigm language.

We've been talking exclusively about generics right now.

But, of course, Swift also supports object-oriented programming.

And so, I'd like to take a few moments to talk about the interaction between those two features, how they work together in the Swift language.

So, with class inheritance, we know how class inheritance works.

It's fairly simple.

You can declare a superclass, like Vehicle.

You can declare some subclasses, like Taxi and PoliceCar.

They both inherit from Vehicle.

And, once you do this, you have this object-oriented hierarchy.

You have some expectations about where you can use those subclasses.

So, if I were to extend Vehicle with a new method, say drive(), I fully expect that I can call that method on one of my subclasses, Taxi.

So, this is a fundamental aspect of object-oriented programming.

And, Barbara Liskov described this really well in a lecture back in the '80s.

Since then, we've referred to this as the Liskov substitution principle.

And, the idea's actually fairly simple.

So, if you have someplace in your program that refers to a supertype, or superclass, like Vehicle, you should be able to take an instance of any of its subtypes, or subclasses, like Taxi or PoliceCar, and use that instead.

And, the program should still continue to type check, and run correctly.

So, the substitution here is that an instance of a subclass should be able to go anywhere that the superclass was expected and tested.

And, this is a really simple principle.

We've all internalized it, but it's also really powerful.

If you think about it.

And, at any point in your program you can think, well, what happens if I get a different subclass here, maybe a subclass I haven't thought about?

So, getting back to generics, what are our expectations when applying Liskov substitution principle to the generic system?

Well, maybe we add a new protocol, Drivable.

Whatever. And, extend Vehicle to make it Drivable.

What do we expect to happen?

Well, we expect that you can use that protocol conformance, of Vehicle to Drivable, for its subclasses as well.

Say, you add a simple generic algorithm to the Drivable protocol to go for a sundayDrive.

Well, now you should be able to use that API on a PoliceCar, even if that might not be the best idea.

So, the protocol conformance here is effectively being inherited by subclasses.

And, this puts a constraint on the conformance.

The one conformance that you write, the thing that makes Vehicle Drivable, has to work for all of the subclasses of Vehicle, now and any that someone comes up with later.

Most of the time, that just works.

However, there are some cases where this actually adds new requirements on the subclasses.

The most common one is when dealing with initializer requirements.

So, if you've looked at the decodable protocol, it has one interesting requirement, which is the initializer requirement to create a new instance of the conforming type from a decoder.

How do we use this?

Well, let's go add a convenience method to the decodable protocol.

It's a static method decode that creates a new instance from a decoder, essentially a wrapper for the initializer, making it easier to use.

And, there's two interesting things to notice about this particular method.

First, is it returns Self with a capital S.

Remember this is the conforming type.

It's the same type that you're calling the static method on.

Now, the second interesting thing is, how are we implementing this?

Well, we're calling to that initializer above to create a brand new instance of whatever decodable type we have, and then return it.

Fair enough.
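A sketch of that convenience method; Decodable's actual requirement is init(from: Decoder) throws, and decode(from:) here is just a wrapper around it:

extension Decodable {
    // Self is the conforming type, so calling this on Taxi produces a Taxi.
    static func decode(from decoder: Decoder) throws -> Self {
        return try Self(from: decoder)
    }
}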

We can go ahead and make our Vehicle type Decodable.

And then, what we expect, when applying the Liskov substitution principle, is we can use any subclass of Vehicle with these new API's that we've built through the protocol conformance.

So, we can call decode on Taxi, and what we get back is not a Vehicle instance, but a Taxi, an instance of Taxi.

This is great, but how does it work?

So, let's take a look at what Taxi might have.

Maybe there's an hourly rate here, and when we call Taxi.decode(from:), we're going through the protocol initializer requirement, and there's only one initializer this can actually call, and that's the initializer that's declared inside the Vehicle class, in the superclass here.

So, that initializer, it knows how to decode all of the state of a Vehicle.

But, it knows nothing about the Taxi subclass.

And so, if we were to use this initializer directly, we would actually have a problem that the hourlyRate would be completely uninitialized, which could lead to some rather unfortunate misunderstandings when you get your bill at the end.

So, how do we address this?

Well, it turns out that Swift doesn't let you get into this problem.

It's going to diagnose at the point where you try to make Vehicle conform to the decodable protocol that there's actually a problem with this initializer.

It needs to be marked required.

Now, a required initializer has to be implemented in all subclasses.

Not just the direct subclasses, but any subclasses of those, any future subclasses you don't know about now.

Now, by adding that requirement, it means that when Taxi inherits from Vehicle, it also needs to introduce an initializer with the same name.

Now, this is important because this initializer's responsible for decoding the hourlyRate, and then chaining up to the superclass initializer to decode the rest of the Vehicle type.
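A sketch of how that plays out, with a made-up wheels property on Vehicle alongside the hourlyRate from the example:

class Vehicle: Decodable {
    var wheels: Int

    private enum CodingKeys: String, CodingKey { case wheels }

    // Must be 'required': the single Decodable conformance has to work for
    // every current and future subclass of Vehicle.
    required init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        wheels = try container.decode(Int.self, forKey: .wheels)
    }
}

class Taxi: Vehicle {
    var hourlyRate: Double

    private enum CodingKeys: String, CodingKey { case hourlyRate }

    // The subclass decodes its own state, then chains up to the superclass
    // initializer to decode the rest of the Vehicle.
    required init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        hourlyRate = try container.decode(Double.self, forKey: .hourlyRate)
        try super.init(from: decoder)
    }
}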

OK.

Now, if you're reading those red boxes really quickly, you may have noticed the subphrase non-final.

So, by definition, final classes have no subclasses.

So, it essentially exempts them from being substituted later on.

That means that there's no sense in having a required initializer because you know there are no subclasses.

And so, final classes, they're in a sense a little easier to work with when dealing with things like decodable or other initializer requirements, because they're exempt from these rules of having required initializers.

So, when you're using classes for reference semantics, consider using final when you no longer need to customize your class through the inheritance mechanism.

Now, this doesn't mean that you can't customize your class later.

You can still write an extension on it.

The same way you can extend a struct or an enum.

You can also add conformances to it, to get more dynamic dispatch.

But, final can simplify the interaction with the generic system, and also unlock optimization opportunities for the compiler and the runtime.

So, we've talked a bit about Swift generics today.

The idea behind Swift generics is to provide the ability to reuse code while maintaining static type information, to make it easier to write correct programs, and compile those down, into efficiently executing programs.

When you're designing protocols, let this push and pull between the generic algorithms you want to write against a protocol, and the conforming types that need to implement that protocol guide your design to meaningful extractions.

Introduce protocol inheritance when you need some more specialized capabilities to implement new generic algorithms that are only supportable on a subset of the conforming types.

And, conditional conformance when you're writing generic types, so that they can compose nicely, especially when working with protocol hierarchies.

And finally, when you're reasoning about the tricky interaction between class inheritance and the generic system, go back to the Liskov substitution principle, and think about what happens here if I introduce a subclass rather than a superclass at which I wrote the conformance.

Well, thank you very much.

There's a couple of related sessions on embracing algorithms and understanding how they can help you build better code, as well as using Swift collections effectively in your everyday programming.

Thank you.

[ Applause ]
