What’s New in MapKit and MapKit JS 

Session 236 WWDC 2019

MapKit and MapKit JS bring fully featured Apple Maps to your app and website. See how the latest features give you more control over the base map presentation, finer-grained search and result filtering of points of interest and address information, and integration with standard data formats for custom overlays and annotations.

[ Music ]

[ Applause ]

Hi, everyone.

Thank you for coming to this session.

I know it’s been a long day.

My name is Alexander Jakobsen and I’m an engineer on the MapKit team.

Apple Maps helps millions of people navigate and explore the world every single day.

And thousands of developers like you are using MapKit to integrate Apple Maps in your apps where you help your users with all kinds of amazing location services.

And last year, we introduced MapKit JS allowing you to bring Apple Maps to your websites as well.

Hopefully you watched the keynote this Monday and saw all the great new features we’re adding to Maps in iOS 13.

We’re very excited about where Maps is going.

But on top of everything you saw in the keynote, we have been very busy this year building a number of new features in MapKit and MapKit JS that you have been asking for.

So, I’m very excited to finally be able to tell you what’s new in MapKit and MapKit JS.

But before we get into all the new features, I want to take a moment and talk about our brand-new base map.

As mentioned in the keynote, we have rebuilt our map from the ground up covering more than 4 million miles of road so far in our fleet of custom cars and planes.

The new map is incredibly detailed and you’ll see more features than ever before like baseball fields, running tracks, walking paths and swimming pools.

And you will see major improvements to the richness and details in parks, greenways, beaches and rural areas, as well as improved roads, buildings, city parks and so much more.

And we’ve also improved the address detail a lot, which means more accurate search and direction results.

And the best part is as we roll out these improvements to Apple Maps, they become automatically available to you through MapKit and MapKit JS.

The updated map will be available in the entire US at the end of 2019 and we’re adding additional countries in 2020.

We’re also making the map available to you in Dark Mode on iOS, tvOS, macOS and the web.

And in this session, we will be using a fictitious WWDC companion app to demonstrate some of the topics we’re going to cover.

This app will be available as sample code from the session web page, but let’s take a quick look at what this app is all about.

There are three main features intended to bring additional value to conference attendees like yourself.

The first feature is the accommodation feature or accommodation finder where attendees can find accommodation for the duration of the conference through the companion app partners.

This view is simply a map with a number of annotations added to it.

The second feature is called After Hours.

And it’s a feature where attendees can find a restaurant or a bar where they’re meeting up with other attendees after conference hours.

As the user types in the search field, relevant suggestions for restaurants and bars are shown, and search results are displayed as annotation views on the map.

And the last feature is the event view.

The event view shows a simple map of the companion app event that takes place the evening before the official WWDC Bash.

This feature focuses the map view on the location of the event and renders a number of annotations and overlays to represent the food and drinks tents, as well as the stage.

So, for the remainder of this session, we’ll go through a little bit more details on the most exciting new features in MapKit and MapKit JS.

And we have a lot of things to cover today like our new snapshot service, new APIs for filtering and increased map view camera control.

So, let’s start out with talking about snapshots.

Snapshots are simply static images of the map and you may be familiar with the map snapshotter that has been available in MapKit for a number of years.

We use snapshots in our own apps where we don't need user interaction in our maps, like in Contacts, Messages, and Calendar.

But since the snapshotter is part of MapKit, you can only create the snapshots in native apps.

But this week, we announced a new service that lets you create and use snapshots in other environments as well.

And this new service is called Maps Web Snapshots.

To fetch a Maps Web Snapshot, all you need is a URL.

And the parameters of that URL dictate the characteristics of the image, such as its center coordinate and its size.

And if you want this same snapshot but with the map in Dark Mode, you can add the colorScheme parameter with the value dark.

All snapshot URLs require a signature, which you can generate after obtaining a MapKit JS API key through the Apple Developer Program.

As part of the MapKit JS beta, you can now request 25,000 snapshots per day, which we hope will be more than enough to cover the needs of people out there.
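Since a snapshot is just a URL, assembling one can be sketched with Foundation's URLComponents. The host, path, and signing parameter names below are assumptions for illustration based on the session's description, and the placeholder values (team ID, key ID, signature) are hypothetical:

```swift
import Foundation

// Build a hypothetical Maps Web Snapshot URL.
// Endpoint and parameter names are illustrative assumptions.
var components = URLComponents()
components.scheme = "https"
components.host = "snapshot.apple-mapkit.com"
components.path = "/api/v1/snapshot"
components.queryItems = [
    URLQueryItem(name: "center", value: "37.3316,-122.0300"),   // latitude,longitude
    URLQueryItem(name: "size", value: "400x200"),               // width x height in points
    URLQueryItem(name: "colorScheme", value: "dark"),           // "light" or "dark"
    URLQueryItem(name: "teamId", value: "YOUR_TEAM_ID"),
    URLQueryItem(name: "keyId", value: "YOUR_KEY_ID"),
    URLQueryItem(name: "signature", value: "GENERATED_SIGNATURE")
]
let snapshotURL = components.url!
```

The resulting URL can then be used anywhere an image URL works, such as an img tag in an email or web page.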

You can use map snapshots any place where you would use a URL to display an image, like in an email, in a URL preview, and of course on web pages.

And to help you get started with generating snapshot URLs, we have built this tool for you, and you can find it, along with more information about the MapKit JS beta program, on the MapKit JS developer page.

So, that’s how you can create Maps Web Snapshots.

So, let’s move on and talk a little bit more about the map in Dark Mode.

Last year, we introduced the map in Dark Mode on macOS.

And this year, we’re bringing it to iOS, tvOS and the web.

So, what do you need to do to use the map in Dark Mode in your apps?

Well, MKMapView will automatically adapt to the user interface style in the trait collection applied to its view hierarchy.

So, if your view is dark, the map view will adapt automatically.

And this should be familiar to you if you have used MapKit on tvOS in the past.

And for those of you who are wondering, yes, this improved map will replace the old one automatically.

As I mentioned before, if you don’t need user interaction in your maps, the right tool to use is the snapshotter.

But unlike the map view, the snapshotter is not aware of the view hierarchy in your apps.

It’s therefore important that you configure your snapshotter to ensure that the snapshot matches the user interface style of your view hierarchy.

And you do this using the snapshotter options.

So, let's look at an example of how to do this.

First, you need to create your snapshotter options and you need to configure what area of the world you want to snapshot.

You also need to provide the size of the snapshot.

Similar to a view, you configure the appearance of your snapshot using a user interface style in a trait collection.

So, if you have a view where you intend to present your snapshot, the easiest way to configure your options is to grab the trait collection of that target view.

This will ensure that the snapshot you create actually matches the user interface style of your view hierarchy.

But remember that users can switch in or out of iOS’s Dark Mode as they’re using your app.

So, make sure that you observe changes to the trait collection in your view hierarchy so you can regenerate your snapshot if the user interface style changes.

In some cases, you may not actually have a target view for your snapshot, maybe because you’re sharing it to another device.

In that case, you can instead just create a UITraitCollection with the appropriate user interface style.

And once you have configured your options, you simply pass them into your snapshotter at creation time and then you tell your snapshotter to create that snapshot for you.
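Putting those steps together, a minimal sketch might look like this. The region, size, and the targetView and imageView names are placeholder assumptions for illustration:

```swift
import MapKit
import UIKit

// Configure the area of the world and the size of the snapshot.
let options = MKMapSnapshotter.Options()
options.region = MKCoordinateRegion(
    center: CLLocationCoordinate2D(latitude: 37.3317, longitude: -122.0301),
    latitudinalMeters: 500,
    longitudinalMeters: 500)
options.size = CGSize(width: 400, height: 200)

// Match the appearance of the view the snapshot will be presented in...
options.traitCollection = targetView.traitCollection

// ...or, with no target view, force a specific appearance instead:
// options.traitCollection = UITraitCollection(userInterfaceStyle: .dark)

// Pass the options in at creation time and create the snapshot.
let snapshotter = MKMapSnapshotter(options: options)
snapshotter.start { snapshot, error in
    guard let snapshot = snapshot else { return }
    imageView.image = snapshot.image
}
```

Remember to regenerate the snapshot when the trait collection changes, as described above.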

And that’s how easy it is to use the map in Dark Mode in your apps.

So, the next thing I want to talk about is a set of new APIs in MapKit and MapKit JS that lets you take control over the point of interest icons that are shown in your map views.

If you’ve ever looked at our maps, you’ve probably seen that apart from all the roads, buildings, parks and water, there’s all these little icons representing restaurants, museums, parking and so on.

And this information is great for users to better understand what the area they’re looking at is all about.

But if you’re building a feature where you’re promoting hotels and hostels for example, you probably have your own data that you will be adding to the map view as annotation views.

And in that case, you may not want these built-in icons to show either because they’re duplicating the information you’re adding or because you don’t want to show businesses that are not connected to your service.

In the past, your only option has been to turn off all the point of interest icons but that means that your users are losing out on a lot of valuable context.

So what you want to do instead is to filter those point of interest icons based on their categories.

So, MapKit and MapKit JS now expose a list of categories that we think you will find useful when you’re building your apps [applause].

And this is obviously not an all-inclusive list.

So if you see one that is missing, come by our lab tomorrow or send us your feedback and most importantly, tell us about your use case.

So, you use these categories to create a pointOfInterestFilter.

And when you create your filter, you can configure it to either include categories or exclude categories.

So let’s take a look at a few examples of this.

By default, the map view does not apply any filter at all.

So, all the point of interest icons are showing.

But in our case, we want hotels, hostels and the like to not show.

And you can achieve this by creating an exclusion filter with the category hotel.

This ensures that no hotels or hostels are shown but the rest of the context is preserved.

If you instead know which categories are relevant in your use case, you can create an inclusion filter with those categories.

So, this filter for example will filter out the courthouse, and the beauty and barber shop around the corner, and only show points of interest from the selected categories: restaurants, nightlife, parking and cafés.

And if you do want to turn off all the point of interest icons, you can still do that using the excludingAll filter.
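The three filters just described can be sketched like this; the mapView is assumed to be an existing MKMapView:

```swift
import MapKit

// Exclude hotels (and hostels, which share the category)
// while keeping all other point of interest context.
mapView.pointOfInterestFilter = MKPointOfInterestFilter(excluding: [.hotel])

// Or: show only the categories relevant to your feature.
mapView.pointOfInterestFilter = MKPointOfInterestFilter(
    including: [.restaurant, .nightlife, .cafe, .parking])

// Or: turn off all point of interest icons.
mapView.pointOfInterestFilter = .excludingAll
```

Setting the property to nil restores the default behavior of showing all icons.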

Point of interest filtering support is also coming to MapKit JS this fall, and it works in a very similar way.

We’re adding a pointOfInterestFilter to the MapKit object and you create a filter using a list of point of interest category values.

Once you’ve created your filter, you apply it to the pointOfInterestFilter property on your map.

So, the map view is definitely MapKit and MapKit JS’s most prominent feature.

But another important cornerstone is the support for search and autocompletion.

So, the next thing I’m going to talk about is how you can improve the results from search and autocompletion using filtering.

The companion app uses search and autocompletion support from MapKit for its After Hours feature.

So, as the user is typing, the autocompletion suggestions are coming from MKLocalSearchCompleter.

And the results shown on the map are fetched using MKLocalSearch.

But if we take a step back and look at those suggestions again, we realize that in a feature where we expect our users to search for bars and restaurants, a middle school is not a very relevant suggestion.

Fortunately, the pointOfInterestFilter we used to filter our map view before also works both for search and autocompletion.

So, you can create your filter for your search feature and apply it to your MKLocalSearchCompleter and to your MKLocalSearch.Request.

And this will narrow down your results to a much more relevant set.

But there’s still addresses showing in this list.

And that's because addresses are distinctly different from points of interest, which generally represent landmarks or businesses.

So, to further improve the results in this list, we want to focus the search and autocompletion results on just points of interest.

Up until now, MapKit has only supported result type filtering for MKLocalSearchCompleter.

And you did this using the filterType property.

But the value locationsOnly still meant that you would get both addresses and points of interest, which is not enough to help us here.

So to address this, we have introduced two new option sets called ResultType.

And you use these option sets to configure the type of results you want from both search and autocompletion.

So the option set for MKLocalSearchCompleter lets you pick any combination of addresses, points of interest and queries.

And for MKLocalSearch.Request you can pick and choose between addresses and points of interest.

So you can now easily configure your completer and your request to only request points of interest.
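Applied to the After Hours feature, that configuration might look like the sketch below. The query string and category choices are assumptions for illustration:

```swift
import MapKit

// One filter shared by autocompletion and search.
let filter = MKPointOfInterestFilter(including: [.restaurant, .nightlife])

// Narrow autocompletion to point of interest suggestions only.
let completer = MKLocalSearchCompleter()
completer.resultTypes = .pointOfInterest
completer.pointOfInterestFilter = filter

// Narrow the search the same way.
let request = MKLocalSearch.Request()
request.naturalLanguageQuery = "ABC"
request.resultTypes = .pointOfInterest
request.pointOfInterestFilter = filter

MKLocalSearch(request: request).start { response, error in
    guard let response = response else { return }
    for item in response.mapItems {
        // The new property makes it easy to reason about each result.
        print(item.name ?? "", item.pointOfInterestCategory?.rawValue ?? "unknown")
    }
}
```
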

And the result you get back from MKLocalSearch is an array of MKMapItems.

And map items contain a lot of useful information like the location coordinate of that result and an address.

And in some cases, you can also have a name for that place, a phone number, or even a URL.

To make it easier for you to reason about the type of results you're getting, we have added a new property called pointOfInterestCategory.

So if you make a search for ABC, you might get the results Al's Beet Canteen and ABC Brewing, for example.

And if you inspect the point of interest categories of these results, you will see that they are a restaurant and a brewery respectively.

If you would apply an inclusion filter with the category brewery, Al’s Beet Canteen will no longer show up.

However, ABC Brewing also runs a restaurant as part of their business, which means if we apply an inclusion filter with the category restaurant, we may still see ABC Brewing in our search results.

And if we inspect the point of interest category, it will still be brewery because that's ABC Brewing's primary category.

The same search and autocompletion filtering support is coming to MapKit JS this fall.

And you filter the point of interests by applying your pointOfInterest filter directly to the search object.

And to narrow down the type of results you're getting, you use the new properties includeAddresses, includePointsOfInterest and includeQueries, also found on the search object.

All right.

And with that I will hand it over to my colleague Nalini, who will show you how quick and easy it is to improve the relevance of your search results using these new APIs.

[ Applause ]

Thank you, Alexander.

Hi. I’m Nalini and I’m a software engineer in the MapKit Framework team.

Alexander already showed the WWDC companion app we’ll be building throughout the session.

We already have a version of the application that implements some of the functionality but isn't quite there yet.

Let me show it to you.

Here, we are looking at the After Hours feature where we’ll be exploring places around San Jose.

We have a map view set up with a search bar.

The search bar uses MKLocalSearchCompleter and MKLocalSearch to search Apple Maps.

This functionality has existed since iOS 9.3.

Let’s go ahead and search for something.

As I’m typing we receive autocomplete suggestions.

As an attendee at the conference, we are not interested in some of these results.

Let’s go ahead and issue a search.

We observe a similar experience.

We get back results that are not relevant for our use case.

Let’s see how to improve this experience using pointOfInterestFilter and result types APIs.

Here we are looking at the After Hours view controller where autocompletion and search are set up.

We’ll declare a pointOfInterestFilter that we will leverage for both autocompletion and search.

The categories we are interested in are night life and restaurant for our use case.

We’ll apply the pointOfInterestFilter to the searchCompleter.

Address results are not relevant for our use case.

We limit the result types to pointOfInterest.

The changes we just did affect autocompletion.

Let’s see how to apply the same filter to search.

When the user issues a search, we form a local search request.

We’ll apply the pointOfInterestFilter to the search request.

We limit the result types to pointOfInterest.

Now that we have pointOfInterestFilter and result types applied to both autocompletion and search, let’s execute our application.

Looking at the After Hours feature, we'll go ahead and search for the same search string.

As you can see, we get results relevant to our use case.

And when we issue a search, we also get relevant results which are displayed as annotations on the map view.

We just saw how to improve the autocompletion and search experience using pointOfInterestFilter and result types APIs.

With that, I hand over back to Alexander.

[ Applause ]

OK. So that was five lines of code to get rid of all those irrelevant results.

So if you're using search or autocompletion in your apps today, I strongly recommend you try out these new APIs.

And that’s all we’re going to cover for search and autocompletion filtering today.

So next I want to tell you about some really nice improvements we have made to our overlay APIs in MapKit.

Overlays are used to layer custom content over a wider area of your map view and they generally represent geometric shapes like lines or polygons.

And to show you these improvements we’re going to take a quick look at the event view of the companion app.

So for this feature we want to render this simple event map in our map view.

And we’ll use overlays to represent the food and drinks tents and the stage.

As they're all rectangular in shape, we can model them using MKPolygons.

And since this map is so simple and we're kind of short on time in this session, we will style them all the same way.

But for every overlay that you add to your map view, you need to provide a renderer object in your delegate method.

If you're adding a lot of similarly styled overlays like this, that means you will be creating a lot of renderer objects that are configured exactly the same way, and that's kind of wasteful.

This will obviously not lead to any noticeable performance impact in an app with seven overlays.

But if you are adding a large number of overlays, you can actually notice the performance impact of this.

So to address this, we have introduced a few new classes in MapKit.

There are two new overlay classes, MKMultiPolygon and MKMultiPolyline that you can use to group polygons and polylines respectively.

And as I said before, every time you add an overlay you need to provide a render object.

So we have introduced two new matching renderer classes, MKMultiPolygonRenderer and MKMultiPolylineRenderer.

Using these classes, you can dramatically reduce the number of renderer objects you need to create in your apps and thereby improve performance.

So let’s look at a simple example of this.

So here we’re creating the polygon for the stage using the coordinates above.

And after creating all the other polygons, we’re adding them straight to the map view.

But this means, like I said before, the delegate will be asked to create seven renderers.

So what you instead want to do is to take all those polygons and group them into an MKMultiPolygon and then simply add that MultiPolygon to your map view.

Once you have updated your code to add MultiPolygons instead, you also need to update your delegate method to expect MultiPolygons.

And when you get one you need to create an MKMultiPolygonRenderer object.

And this is done exactly the same way as with the old MKPolygonRenderer.

Apart from saving memory by creating fewer renderer objects, this also improves the rendering performance, because MapKit can now be smarter and batch up the rendering of those polygons for you.
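The grouping and the matching delegate method can be sketched as follows. The polygon names and colors are placeholder assumptions, and the delegate method is assumed to live in a type conforming to MKMapViewDelegate:

```swift
import MapKit

// Group all similarly styled polygons into a single overlay...
let tents = MKMultiPolygon([stagePolygon, foodTentPolygon, drinksTentPolygon])
mapView.addOverlay(tents)

// ...and return a single renderer for the whole group.
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    guard let multiPolygon = overlay as? MKMultiPolygon else {
        return MKOverlayRenderer(overlay: overlay)
    }
    let renderer = MKMultiPolygonRenderer(multiPolygon: multiPolygon)
    renderer.fillColor = .systemBlue
    renderer.strokeColor = .white
    return renderer
}
```
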

Another improvement we have made is that MapKit will now automatically render all your overlays that you create using the built-in MapKit render objects as vector graphics instead of bitmaps.

And vector rendering greatly improves the look of your overlays when the user interacts with the map because they scale much more nicely when the user zooms in and out in the map view.

If you’re adding very large and complex overlays and they’re for some reason not looking quite right when rendered as vector graphics, you can opt out of vector rendering using the shouldRasterize property on the renderer.

And that’s how you use the new overlay APIs in MapKit.

[ Applause ]

So the next thing I want to cover is MapKit’s new support for GeoJSON.

GeoJSON is a widely used storage and wire format for representing geometric objects like points, lines and polygons.

And a lot of vendors publish their data as GeoJSON.

So some of you may already have written code to parse that GeoJSON, create annotations and overlays that you’re adding to your map view.

So with these new APIs, we’re hoping that working with GeoJSON will be easier than ever and maybe you can even delete some of that code you have written.

[ Applause ]

For those of you who are not familiar with GeoJSON, here’s a very simple example of how it can represent a location.

At the top level there’s the type member that lets us know that this is a feature and a feature can have an optional identifier which you can use to uniquely identify this feature among others.

And this feature is defined by a single geometry, a point, but a feature can also have multiple geometries.

And in addition to the geometry there’s the properties member which in this case just contains the name of this location, the stage.
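A minimal GeoJSON document matching that description might look like this; the identifier and coordinates are placeholders, and note that GeoJSON coordinates are ordered longitude, latitude:

```json
{
  "type": "Feature",
  "id": "stage-1",
  "geometry": {
    "type": "Point",
    "coordinates": [-121.889, 37.330]
  },
  "properties": {
    "name": "The Stage"
  }
}
```
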

So to represent this feature in MapKit, we have introduced a new class called MKGeoJSONFeature.

And this class is simply a data container that holds the identifier, the decoded geometry and the properties.

So to get from your GeoJSON to actual MapKit classes, we have introduced another class called MKGeoJSONDecoder.

If you've ever used Swift's JSONDecoder, MKGeoJSONDecoder should feel pretty familiar to you.

You simply create your decoder, you pass it your data, and it will return an array of either MKGeoJSONFeatures or MapKit geometry such as MKPolygon, MKPolyline and so on.

And this depends on how your GeoJSON is structured, because you can have either features or geometry at the top level of your GeoJSON.

So if we take a look at how this example GeoJSON from before would be decoded by the MKGeoJSONDecoder, since it has a single feature, our resulting array will just have one item.

But the decoder also decodes the geometry into MapKit classes.

So the point location here will be decoded as an MKPointAnnotation and the feature itself will be decoded as an MKGeoJSONFeature with a reference to that MKPointAnnotation.

So with the introduction of MKMultiPolygon and MKMultiPolyline that I talked about before, MapKit now has a complete mapping from GeoJSON geometry into MapKit classes.

And this means that once you have decoded your GeoJSON, you will have annotations and overlays that are more or less ready to be added to your map views.

So let’s look at a simple example of this.

First, you create your decoder and you pass it your data.

And as I said, depending on how your GeoJSON is structured, you can then either work with your top level features or your geometries.

If you’re doing additional parsing of a feature, you use the geometry property to get access to those polygons and polylines.

In our example, we knew that the GeoJSON would only have a single feature and a point geometry.

So this code is making some assumptions.

In most cases, you would want this part of your code to also take MKPolygons, MKMultiPolylines and so on into consideration.
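A sketch of that decoding flow, handling both top-level features and bare geometry, might look like this. The geoJSONData and mapView values are assumed to exist already:

```swift
import MapKit

do {
    let objects = try MKGeoJSONDecoder().decode(geoJSONData)
    for object in objects {
        if let feature = object as? MKGeoJSONFeature {
            // A feature: inspect its decoded geometry.
            for geometry in feature.geometry {
                switch geometry {
                case let annotation as MKPointAnnotation:
                    mapView.addAnnotation(annotation)
                case let overlay as MKMultiPolygon:
                    mapView.addOverlay(overlay)
                default:
                    break // handle MKPolygon, MKPolyline, etc. as needed
                }
            }
        } else if let annotation = object as? MKPointAnnotation {
            // Bare geometry at the top level of the GeoJSON.
            mapView.addAnnotation(annotation)
        }
    }
} catch {
    print("Failed to decode GeoJSON: \(error)")
}
```
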

In the GeoJSON specification, the properties member can be any valid JSON or even null.

And this means that MapKit cannot really make any assumptions about how to parse this data.

So MKGeoJSONFeature exposes its properties as the Data type.

So if you know the structure of your properties, you can use the JSONDecoder to map that data into an appropriate model class.

In our case we know that the properties contained a string mapping to a string.

So we will simply map it to a dictionary, which makes it really easy for us to read out the value for the name key.

But in some cases, you may not be in control of your GeoJSON.

You may not actually know the structure of your properties.

In that case, you can instead use the JSON serialization API to dynamically explore the properties depending on its type.
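Both approaches can be sketched briefly; feature is assumed to be an MKGeoJSONFeature obtained from the decoder:

```swift
import Foundation
import MapKit

// Known structure: decode the properties into a concrete type.
if let propertiesData = feature.properties {
    // We know this feature's properties map strings to strings,
    // e.g. {"name": "The Stage"}.
    let properties = try? JSONDecoder().decode([String: String].self,
                                               from: propertiesData)
    let name = properties?["name"]
}

// Unknown structure: explore the JSON dynamically instead.
if let propertiesData = feature.properties,
   let json = try? JSONSerialization.jsonObject(with: propertiesData) {
    // Inspect `json` (dictionary, array, string, ...) based on its runtime type.
}
```
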

So what about MapKit JS?

We actually already support GeoJSON in MapKit JS.

So you can map your GeoJSON into an existing MapKit JS item such as annotations, overlays and ItemCollections.

And let’s have a look at a quick example of that.

So you simply pass either a URL or a valid GeoJSON object to the importGeoJSON function on the MapKit object.

This function will return an ItemCollection that contains one or many MapKit JS items, which you can then easily add to your map object using either addItems or showItems.

All right, and I will now hand it back over to Nalini one more time.

And now she’s going to show us how you can get that event map rendered in the map view.

[ Applause ]

Let’s continue building the WWDC Companion app.

Here we are looking at the Concert in The Park feature where we have a map view set up for the concert area.

What concert would be complete without food and drinks?

We need tents for these food and drinks.

We’ll be rendering these tents as overlays and annotations on the map view.

Our data is provided to us in GeoJSON format.

Let’s have a quick look at the data.

In our JSON we have two features.

The top level is grouped in feature collection.

Our first feature is event tents.

We’ll be rendering many tents, which means more options for food and drinks.

The geometry of our tents is a MultiPolygon.

These are the different coordinates of the tents.

We’ll be using this data to render overlays on the map view which will represent the tent outlines.

Our second feature is a point, which describes the names of the various food and drink tents.

We’ll be using this data to render annotations on the map view, which will represent the tent labels.

Let’s see how to parse these two features in code.

Here, we are looking at the event data source where we will load the JSON and subsequently parse it.

Let’s go ahead and load the JSON.

Once the JSON is loaded, we will use the MKGeoJSONDecoder to decode this into an array of MKGeoJSON objects.

We’ll subsequently parse these objects.

Let’s go ahead and implement the parse function.

We’ll iterate through the GeoJSON objects.

We saw in our JSON that the top level is a feature, so we will treat this object as an MKGeoJSONFeature.

In a generic parser, we would want to handle other geometry objects as well.

We'll iterate through the feature.geometry and we'll filter this down to the native MapKit type MKMultiPolygon.

We will add this to an array of overlays.

This is the array we will leverage to add the overlays to the map view.

So let’s go to the event view controller where the map view is set up and add the overlays.

For every overlay that we add to the map view, we need to provide a renderer in the map view’s delegate method.

Let’s go ahead and set up the delegate method.

We are passed the MKOverlay in the delegate method and we are expected to return an MKOverlayRenderer.

So let’s go ahead and set up the renderer.

We saw that the geometry of our tents is a MultiPolygon.

So we'll treat this overlay as an MKMultiPolygon.

Here we specify the visual representation of the polygon overlay and we specify how we want our tents to be rendered.

So now that we have the overlays added and the visual representation specified for the same, let’s go ahead and execute our application.

Looking at the Concert in the Park, and there you go.

We now see the tent outlines rendered as overlays on the map view.

It would be helpful to display the tent labels so we know where we can get the food and the drinks from.

We’ll be displaying these labels as annotations on the map view.

So let’s go back to the event data source.

The way the map view handles annotations is different than overlays.

So we’ll separate the annotations from the overlays.

The names for the annotations are parsed from feature.properties.

This is the data we will use to configure the annotation.

So let’s go ahead and implement the configure function.

The properties of our points are a dictionary mapping strings to strings.

We will use Swift’s JSONDecoder to decode this into a dictionary of strings.

We’ll add this configured data to an array of annotations.

This is the array we will leverage to add the annotations to the map view.

So let’s go to the event view controller and add the annotations to the map view.

Let’s set up the view for the annotation.

We will use the MKMapViewDefaultAnnotationViewReuseIdentifier constant to register our custom annotation view.

Now that we have the model and the view set up for our annotations, let’s execute our application.

So we now see the tent labels which are rendered as annotations on the map view.

We just saw how to render annotations and overlays from GeoJSON data.

However, in our map view, numerous points of interest are displayed which are interfering with our map.

Let’s leverage the exclude all point of interest filter that Alexander was talking about to turn these off.

Now that we have the pointOfInterestFilter applied, let’s execute our application.

And we just saw how easy it is to enable users to focus on information relevant to our use case.

With that, I hand over back to Alexander.

[ Applause ]

As you saw in this demo, the new MKGeoJSONDecoder can make it a lot easier for you when working with GeoJSON.

So it was pretty straightforward to get this event map rendered in the map view.

But in some use cases like when you’re dealing with indoor data for large venues, your data is much more complex.

So, to standardize and simplify working with this kind of complex indoor data, we have developed the Indoor Mapping Data Format, IMDF.

The IMDF specification is built on top of GeoJSON that we just talked about and it provides a comprehensive model for indoor data that lets you deal with any conforming JSON in a generalized way.

And to learn more about how to render rich indoor data in your maps, I strongly recommend watching this session, Adding Indoor Maps to your App and Website, which is tomorrow at 2 p.m. That session will cover the IMDF specification in much more detail.

And that’s all I’m going to talk about the new GeoJSON support in MapKit.

So the last big topic I want to cover is a set of new APIs in MapKit and MapKit JS that lets you really take control over the map view’s camera.

And to show you this, we’ll take another look at the event view in the companion app.

So this view is intended to show this event map.

There’s very little reason for users to pan away to San Francisco, for example.

So, to focus the map view on the region that matters, you can add a boundary that limits the area where the user can pan.

So we introduced a new class called CameraBoundary that defines a region within which the center point of your map view always needs to remain.

There are two ways to create a camera boundary, either using a coordinate region or a MapRect.

Now once you’ve created your camera boundary, you apply it to the map view’s new cameraBoundary property.

But before you do, make sure that your map view is centered over a location inside that camera boundary.

You know your apps better than anyone else and if your map view is located outside of the camera boundary that you’re applying, the map view will update to a location inside the boundary.

And in most cases this will not be the location you would have chosen for your users.

And once the cameraBoundary has been applied, the map view will strictly enforce it.

This means that if you call setRegion, for example, with a location that would force the map view to move outside of the camera boundary, it will instead move as close as it can, but it will not violate its camera boundary.

And this is of course not just true for setRegion, but for any API that modifies the center point of your map view.

And we’ve also added support for camera boundaries in MapKit JS, with a slight difference in how it works: there’s no separate boundary class.

You use either a CoordinateRegion or a MapRect as your camera boundary.

So you simply apply your CoordinateRegion or your MapRect to the new cameraBoundary property on your map object.

So we now have a mechanism for keeping our map views in the region that matters.

But users can still zoom out to the point where that region is no longer visible.

And that’s possible because the camera boundary only ensures that the center point of the map is inside that region.

And we’re still centered over that park.

We’re just really, really zoomed out.

So we also need a way to limit the zoom in the map views.

And the map view zoom is controlled by its camera.

And if you’ve used MKMapCamera in your apps in the past, you’ve probably worked with the altitude API.

However, if you work with pitched cameras, it’s much more intuitive to think about the distance from the center coordinate of your map up to the camera.

And this is also the distance that controls the zoom of your map view.

So for this reason we have introduced a new property on MKMapCamera called centerCoordinateDistance.

And I would like to encourage you to move away from thinking about altitude and instead think about the distance from the center coordinate up to the camera.

And when you make this transition in your code, there’s one important thing to keep in mind.

In an altitude-centric model like we’ve used in the past, a change to the pitch of the camera meant that the camera moved farther away from the center point while preserving its altitude.

But in a distance-centric model, when you change the pitch of your camera, the expectation is instead that the distance will be preserved and, as a result, the altitude will change.

MKMapCamera has now been updated so that the behavior of the pitch property changes if you set a centerCoordinateDistance.
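
As a concrete illustration of the distance-centric model described above, here is a small sketch; it assumes a `mapView` outlet, and the coordinate and distance values are purely illustrative.

```swift
import MapKit

// Position the camera by its distance from the center coordinate,
// not by altitude.
let camera = MKMapCamera()
camera.centerCoordinate = CLLocationCoordinate2D(latitude: 37.3349,
                                                 longitude: -122.0090)
camera.centerCoordinateDistance = 500  // meters from center coordinate to camera
camera.pitch = 45                      // distance is preserved; altitude changes
mapView.camera = camera
```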

So you now know that the centerCoordinateDistance is what controls the zoom of your map view.

But how do you actually constrain it?

For this purpose we’ve introduced another new class called CameraZoomRange.

And the CameraZoomRange defines a minimum and a maximum centerCoordinateDistance.

And when you apply the CameraZoomRange to your map view, the camera will be forced to stay in that range.

So you simply create your CameraZoomRange with a minimum and a maximum distance and then you apply it to the new cameraZoomRange property on your map view.

And if you want, you can also create a CameraZoomRange with only a minimum or only a maximum distance.
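
A sketch of both variants, assuming a `mapView` outlet; the distances are placeholder values.

```swift
import MapKit

// Clamp zoom between two center-coordinate distances (in meters).
let zoomRange = MKMapView.CameraZoomRange(minCenterCoordinateDistance: 100,
                                          maxCenterCoordinateDistance: 2_000)
mapView.setCameraZoomRange(zoomRange, animated: true)

// A one-sided range is also possible, for example only a maximum:
// mapView.cameraZoomRange = MKMapView.CameraZoomRange(maxCenterCoordinateDistance: 2_000)
```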

As you’ve probably noticed in the past, even without the CameraZoomRange there are restrictions on how far in and out you can zoom in a map view.

And these restrictions may change depending on what region of the world you’re viewing or what map type you’re using.

In some cases these restrictions exist for technical reasons, and in other cases for legal reasons.

What this means is that even if you apply a CameraZoomRange with a minimum distance shorter than the map view’s default, your users will not be able to zoom in any farther.

And similarly a larger max distance will not allow your users to zoom out any farther.

And if your camera is outside of the CameraZoomRange you’re applying, the map view will update the centerCoordinateDistance of your camera.

So just like with the cameraBoundary, the map view will strictly enforce the CameraZoomRange once you’ve applied it.

And we’ve also added support for camera zoom ranges in MapKit JS.

So there’s a new CameraZoomRange object added to the mapkit namespace.

And in the same way you can create it with a minimum and a maximum or just one of the two.

You then apply it to the new cameraZoomRange property on your map object.

All right, and I will now hand it back one more time to my colleague Nalini.

And this time she will show you how to use these camera zoom ranges and camera boundaries to improve the experience in your map views.

[ Applause ]

Here we are looking at the concert map where we just rendered annotations and overlays.

I’m zooming out of the concert area, and users can zoom out to San Jose and all of California.

We want to control the zoom range of the map view.

Let’s leverage the CameraZoomRange API that Alexander was talking about.

The min and the max are specified in meters.

By applying a CameraZoomRange to the map view, we restrict how far in and out users can zoom in the map view.

Let’s execute our application with the CameraZoomRange applied.

I’m trying to zoom out again.

OK, I cannot zoom out.

Let’s try to zoom in.

I’m zooming in.

I can get to the stage.

I can get to the different tents in the concert area.

So this zoom range is perfect for our use case.

However, as you can see, users can pan away.

We want users to focus on the concert area.

Let’s leverage the cameraBoundary API that Alexander was talking about.

We set up our MKCoordinateRegion with the eventCenter as its center point and a span of latitudinalMeters: 20 and longitudinalMeters: 10.

We use this coordinateRegion to apply the CameraBoundary.

By applying a CameraBoundary to the map view, we ensure the center point of the map view lies within this region.
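
The setup just described can be sketched as follows. It assumes a `mapView` outlet, and the `eventCenter` value is a placeholder; the 20-by-10-meter span matches the numbers mentioned in the demo.

```swift
import MapKit

// Placeholder coordinate for the event's center.
let eventCenter = CLLocationCoordinate2D(latitude: 37.33, longitude: -121.89)
let region = MKCoordinateRegion(center: eventCenter,
                                latitudinalMeters: 20,
                                longitudinalMeters: 10)

// Keep the map's center point inside this region.
mapView.cameraBoundary = MKMapView.CameraBoundary(coordinateRegion: region)
```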

Let’s execute our application with the CameraBoundary applied.

I’m going to try to pan away.

As you can see I cannot pan away.

Let’s try zooming in.

I really want to get to that fancy stage area.

Rumor has it Lady Gaga may be performing but it seems I cannot get to it.

Let’s try with some updated numbers.

Let’s try with latitudinalMeters: 100 and longitudinalMeters: 80 and let’s execute our application with these updated numbers.

OK, I’m going to zoom in again and try to get to the stage area.

And as you can see I can get to the stage area, I can get to the restrooms.

We just saw how to leverage the cameraBoundary and the CameraZoomRange APIs to ensure that users can focus on information relevant to our use case.

With that, I hand it back to Alexander.

[ Applause ]

So, once again, just a few lines of code change and you get a completely different map view experience.

So, try this out in your apps.

And that was actually the last thing we wanted to cover for the map view and camera APIs.

But like I said before we have covered quite a lot of stuff today.

So before we wrap up I want to summarize the key points we have talked about today that I hope you will bring with you from this session.

So with the introduction of the new snapshot service you can now create snapshots and use them on the web as well.

So if you don’t need user interaction in your maps, don’t waste resources loading a full-blown map.

And every app is unique, and different data is relevant in different contexts.

So tailor your map views to your needs in your apps using the new pointOfInterestFilters.

And make sure that your search and autocompletion results are as relevant as possible for your users using the new pointOfInterestFilters and the result type filters.

And if you’re adding a lot of polygons and polylines in your map views, group them using the new multipolygons and multipolylines.

And if you’re working with GeoJSON, take advantage of the new support in MapKit and in MapKit JS, that way you will need to write less code and maintain less code.

And if your map view is really all about a fenced off area, try out the new camera boundaries and camera zoom ranges to really focus your map views on the region that matters.

And for more information and sample code for both MapKit and MapKit JS, visit this session’s web page.

And if you have any questions, come by our lab tomorrow at 3 p.m. With that, I want to thank you for coming, and I hope you have a very nice evening and enjoy the last day of Dub Dub tomorrow.

[ Applause ]
