Harnessing Metadata in Audiovisual Media

Session 505, WWDC 2014

Rich metadata, such as time, date, location, and user-defined tags, is particularly useful for time-based media. Discover how to harness static and timed metadata in your AVFoundation apps. See how to write metadata into media and read it during playback. Gain knowledge of best practices for privacy and protecting user data.

Hello, good afternoon, and welcome to session 505, “Harnessing Metadata in Audiovisual Media”. I’ve heard we’re competing with the “Intro to Swift” talk, so get intimate with your neighbors here. My name is Adam Sonnanstine. I’m an engineer on the AVFoundation team, and today we’re going to talk about, of course, metadata.

So what do I mean by metadata? For the purposes of this talk, we’re going to define metadata to mean any data, stored in movie files, streaming presentations, or any other sort of audiovisual presentation, that describes the primary data, like the audio and video we think of when we think of those sorts of presentations.

Some examples are always helpful. One you should be familiar with is iTunes metadata: the song names, the artists, and the album artwork in your iTunes library. All of these are stored, as the sort of metadata I’m talking about today, in the files in your iTunes library.

Besides iTunes metadata, we also have things like location information. If you have a movie that you took with your iPhone or some other location-enabled device and you play it in QuickTime Player, it will show up in the info window to tell you where you were when you took that movie. That’s also stored as the kind of metadata we are talking about today.

Now for some new features. We know that you’re not always standing still when you’re taking your videos, so new in iOS 8 and OS X Yosemite, we have features that support things like dynamic location: a location that changes over time. These are new features we’ll be talking about later on that we’re pretty excited about and, in addition to location, this really applies to any sort of metadata you might want to add that changes over time in your movie.

This is a screenshot of a demo app we’ll show you later, but the circle and the annotation text are all stored as the same sort of timed metadata as a timed location. So hopefully that whets your appetite a little bit.

Let’s talk about what we’re going to cover today. We’re going to start with an intro to metadata in AVFoundation: some of the classes that have been around for a while for describing all sorts of metadata, how to inspect it, and how to author it. Then we’ll talk more about those new timed metadata features, and I’ll finish with some best practices, including some privacy considerations to keep in mind.

So, our first topic: metadata in AVFoundation. What kind of classes are we going to be using to describe our metadata? Our primary model object, used to describe both movie files and HLS streams, is AVAsset. An AVAsset can contain any number of metadata objects, and each AVMetadataItem instance represents a single piece of metadata; your track name, your album artwork, even your location data will each be a separate piece of metadata in our runtime environment.

Taking a closer look at AVMetadataItem: at its core, it has two properties. The first is identifier, which is actually a new property, and it describes the kind of metadata that you have. In this example, we have the song name, represented by this long symbol name, AVMetadataIdentifieriTunesMetadataSongName. Then you have the value, which is the actual payload of the metadata item; for song name, it’s the name of the song as a string.

As the cover art example shows, the value doesn’t have to be a string; it can be an image or any other object that conforms to both the NSObject and NSCopying protocols.

Now, if you’ve used AVMetadataItem in the past, you might be familiar with the key and keySpace properties. I mentioned that the identifier was new; it’s new because it is a combination of the old properties, key and keySpace. So I’m not going to talk much about key and keySpace today; going forward, I’ll mostly be talking about identifier as the way to describe your metadata.

Take a look at some of the built-in identifiers we have; this is just a tiny sampling. There are a lot of them; you can find them in AVMetadataIdentifiers.h. I’ve arranged them here roughly according to the old notion of key space, so this is just a sampling of the kinds of metadata that we already know you might want to represent.

Going back to the metadata item itself, we have a property that’s also new, called dataType, which describes the native data type of your metadata. For the case of our song name, it’s stored as a string, so we see that the data type is a UTF-8 string; the string constants that represent the different data types are all defined in CMMetadata.h. Besides the dataType property, we also have several type coercion properties you can use if you know you want your payload in the form of a certain type of Objective-C object: stringValue, numberValue, dateValue, and dataValue, and those will give you exactly what you’d expect.

For the case where our native payload is a string, only stringValue will give you an interesting answer; the rest will give you nil. For our artwork example, where the payload is a JPEG image, the top three will give you nil, and dataValue will give you the NSData you’re looking for, so you can grab the bytes of the JPEG image.

There are examples where you can have more than one non-nil type coercion property. One is creation date, where the date is stored in a standard string format for dates. You can either get the actual string that was stored, or you can ask the metadata item for an instance of NSDate that describes the same thing in a more convenient representation.

So that’s your brief intro to AVMetadataItem; we’re going to be talking a lot about it throughout the talk. Let’s go back to AVAsset so we can see how we actually get these metadata items.

The easiest way is just to ask for all of the metadata that applies to the entire asset. There are types of metadata that apply to just parts of the asset, but this is how you get the metadata, like the location and the song title, that applies to the entire asset.

There’s also a way to get just a subset of the metadata. We have this notion of a metadata format, which I’m not going to talk about too much today, but you can use it to get just a subset of the metadata. So for our example, when getting iTunes metadata, we use AVMetadataFormatiTunesMetadata and grab all of it using the metadataForFormat: method. From there, we can use the filtering method metadataItemsFromArray:filteredByIdentifier: to get just the items that correspond to the song name identifier.
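As a sketch of what those two calls might look like in Objective-C (assuming “asset” is an AVAsset you’ve already created, for example from a movie URL):

```objc
// Get all iTunes-format metadata from the asset, then filter it down
// to just the song name items.
NSArray *itunesMetadata = [asset metadataForFormat:AVMetadataFormatiTunesMetadata];
NSArray *songNameItems =
    [AVMetadataItem metadataItemsFromArray:itunesMetadata
                      filteredByIdentifier:AVMetadataIdentifieriTunesMetadataSongName];
AVMetadataItem *songNameItem = songNameItems.firstObject;  // may be nil
```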

You might be wondering why you can have more than one song name in a single asset. We’ll get back to that in just a little bit, but first I want to talk about how you load the payload of your metadata items.

AVMetadataItem conforms to the AVAsynchronousKeyValueLoading protocol, declared in AVAsynchronousKeyValueLoading.h. This is a protocol we define ourselves in AVFoundation, and a lot of our core model objects conform to it, because a lot of the time, when you get an AVAsset or an AVMetadataItem, we haven’t actually loaded the data behind it yet. So you can use the method loadValuesAsynchronouslyForKeys:completionHandler: to load the specific values you want, and it will do that asynchronously, so you’re not blocking your main thread with synchronous I/O.

So for this case, we have our metadata item and we’re looking for the value, so we just load the “value” key. When we get our completion handler, we check the status to make sure the loading succeeded and, assuming it did, we can go ahead and grab the value and use it however we see fit.
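A minimal sketch of that loading dance (“item” is an AVMetadataItem obtained from an asset):

```objc
// Load the "value" key asynchronously so we never block the main thread,
// then check that loading succeeded before using the payload.
[item loadValuesAsynchronouslyForKeys:@[@"value"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [item statusOfValueForKey:@"value" error:&error];
    if (status == AVKeyValueStatusLoaded) {
        id<NSObject, NSCopying> value = item.value;
        NSLog(@"Loaded metadata value: %@", value);  // use it however you see fit
    }
}];
```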

So, back to that whole multiple-titles-in-one-asset thing. One reason we might have that is if the asset is localized into multiple languages; an asset can have the same metadata item in multiple languages. The example we’re going to talk about is the QuickTimeUserDataFullName identifier. If you use this identifier in your files, then QuickTime Player, for example, can pick up the title and display it in the title bar.
[ Silence ]

So this is just yet another example [ Silence ]

of how metadata is used in our applications.

[ Silence ]

This particular example [ Silence ]

of the movie actually has the title available in both English [ Silence ]

and Spanish so here we have the English as the system language [ Silence ]

so QuickTime Player picks that up, picks up the English title [ Silence ]

but if we set our system language to Spanish it will pick [ Silence ]

up the Spanish localization of that title instead.

These are represented as two distinct pieces of metadata within the file, and the way you distinguish between them is that they’ll have different values for the final two properties of AVMetadataItem: locale and extendedLanguageTag. extendedLanguageTag is new in this release; it’s a BCP 47 language tag, and it’s particularly useful when you want to distinguish written languages. So that’s one reason why you might have more than one metadata item with the same identifier.

I mentioned before that not all metadata applies to the entire asset; one example is metadata that applies only to a particular track. In this example, we have a special label attached to our subtitle track called SDH, which stands for Subtitles for the Deaf or Hard of Hearing. That’s basically a richer form of subtitles that includes things like labeling who’s talking and mentioning sound effects that are vital to the understanding. We talked a little more about SDH and accessibility in general last year in our “Preparing and Presenting Media for Accessibility” talk, so check that one out for more details. For the purposes of this talk, just know that getting this SDH label involves setting track-specific metadata.

So let’s talk about how you actually find out whether your track has this metadata in it. You’re going to use AVAssetTrack, which has pretty much the exact same API as AVAsset for reading metadata: you have your metadata property and your metadataForFormat: method. So if we want to find all the tagged characteristics in an asset track, we ask the track for its metadata for AVMetadataFormatQuickTimeUserData. Once we have that, we use the same filtering method we saw before to get all of the items that have the identifier QuickTimeUserDataTaggedCharacteristic. SDH is one example of a tagged characteristic, and it’s the payload of the metadata item that tells you which kind of tagged characteristic you’re dealing with. We’ll go into a little more detail about SDH and how to author it in just a bit.

Going back to our list of identifiers, you might have noticed some patterns if you were looking closely: each of these groups has its own version of a title or a song name or something like that. We noticed that and came up with our own special kind of identifier, called a common identifier, which you can use when you want to look up, say, a title without caring exactly how it’s stored in your file. The same goes for copyright; we also have a common identifier that represents copyright. These are not the only common identifiers; there’s a whole list of them, but these are just two examples.

So if we go back to our example where we’re looking for our iTunes song name: if we don’t actually care that the title of our asset is stored as iTunes metadata, and we just want a title so we can display it somewhere, we can ask the asset for its array of commonMetadata. This is all of the metadata items that can be represented using a common identifier. Then we use that same filtering method we’ve been using to filter down to just the items that have CommonIdentifierTitle, and we can go from there with our title. It’s also worth noting that AVAssetTrack has the same commonMetadata property, so you can do the same thing over there as well.
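A sketch of the format-agnostic lookup (“asset” is an AVAsset you already have; note that stringValue may be nil until the item’s value has been loaded):

```objc
// Ask for all metadata representable with common identifiers, then filter
// down to title items, regardless of how the title is stored in the file.
NSArray *titleItems =
    [AVMetadataItem metadataItemsFromArray:asset.commonMetadata
                      filteredByIdentifier:AVMetadataCommonIdentifierTitle];
AVMetadataItem *titleItem = titleItems.firstObject;
NSString *title = titleItem.stringValue;  // nil if the asset has no title metadata
```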

So that is your brief introduction to inspecting metadata with AVFoundation. Let’s talk a little bit about authoring. If you want to make your own files that have, say, location or iTunes metadata in them, we have several different classes that can write movie files: AVAssetExportSession, AVAssetWriter, and the capture movie and audio file outputs. These all have the exact same read/write property, called simply metadata.

You give it an array of metadata items, and that will be written out to the file. Similarly, for track-specific metadata like those tagged characteristics, you can use an AVAssetWriterInput, which also has the exact same property.

Now, you are not limited to writing out metadata that you got from somewhere else, like another file, through the APIs we’ve been looking at. You can also create your own metadata items with AVMutableMetadataItem, the mutable subclass of AVMetadataItem. As you might expect, it just has read/write versions of all of the properties of AVMetadataItem.

So let’s use the example of writing a subtitle track that is marked as SDH. It’s actually two different tagged characteristics that you have to use, so we create two metadata items and set both of their identifiers to the identifier we just saw, QuickTimeUserDataTaggedCharacteristic. One of them gets the value TranscribesSpokenDialogForAccessibility, and the other DescribesMusicAndSoundForAccessibility. Then we get the AVAssetWriterInput that’s going to write our subtitle track and set the array of the two items on that asset writer input. So that’s how you would author a subtitle track that is marked as SDH; just one example of using tagged characteristics with AVMutableMetadataItem.
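A sketch of those two items in Objective-C (“subtitleInput” is assumed to be the AVAssetWriterInput for your subtitle track):

```objc
// Two tagged-characteristic items together mark a subtitle track as SDH.
AVMutableMetadataItem *transcribes = [AVMutableMetadataItem metadataItem];
transcribes.identifier = AVMetadataIdentifierQuickTimeUserDataTaggedCharacteristic;
transcribes.value = AVMediaCharacteristicTranscribesSpokenDialogForAccessibility;

AVMutableMetadataItem *describes = [AVMutableMetadataItem metadataItem];
describes.identifier = AVMetadataIdentifierQuickTimeUserDataTaggedCharacteristic;
describes.value = AVMediaCharacteristicDescribesMusicAndSoundForAccessibility;

subtitleInput.metadata = @[transcribes, describes];
```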

A special note about AVAssetExportSession: by default, the export session will take any of the metadata in the source asset you’re exporting and copy it over to the output file. That’s not the case if you set metadata on its metadata property; doing so is the signal telling the export session to ignore the metadata in the source file and instead write just what you put on the property. So if you want to augment the metadata, or do some other sort of modification, you’ll want to grab the array of source metadata, make a mutable copy, make any adjustments you want, and then set that on the metadata property. So that’s just a quick note about export session.
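As a sketch of that augmentation pattern (“asset” and “exportSession” are assumed to already exist, and the added title item is a placeholder):

```objc
// Copy the source asset's metadata, append an item, and hand the whole
// array to the export session.
NSMutableArray *metadata = [asset.metadata mutableCopy];

AVMutableMetadataItem *title = [AVMutableMetadataItem metadataItem];
title.identifier = AVMetadataIdentifierQuickTimeMetadataTitle;
title.value = @"My Annotated Movie";  // placeholder value
[metadata addObject:title];

// Setting the property tells the export session to ignore the source
// asset's metadata and write exactly this array instead.
exportSession.metadata = metadata;
```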

The last note about authoring metadata is about HTTP Live Streaming. This is actually a new feature in iOS 8 and OS X Yosemite: you can use a new tag, the session-data tag, in your playlist. It has a required DATA-ID field, which is a lot like the identifiers we’ve been talking about; then either a URI, which can point to the payload, or a VALUE, which directly specifies the payload; and, optionally, some language information. Here’s an example, very similar to what we saw before with the titles in two different languages, but this is the markup you’d use for HTTP Live Streaming.
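A sketch of what that playlist markup could look like (the reverse-DNS DATA-ID and the titles are hypothetical placeholders):

```
#EXT-X-SESSION-DATA:DATA-ID="com.example.movie.title",VALUE="My Movie",LANGUAGE="en"
#EXT-X-SESSION-DATA:DATA-ID="com.example.movie.title",VALUE="Mi película",LANGUAGE="es"
```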

For more information on reading and writing metadata, we have some sample code called AVMetadataEditor, and for more information about the details of writing HTTP Live Streaming metadata, see the documents at this URL.

All right, so that is your crash course in metadata in AVFoundation. Our next topic is timed metadata.

Now, timed metadata, although I mentioned we have new features, is not a new concept. We have supported the notion of chapters for quite some time, and conceptually, chapters are just an example of timed metadata. Each of these chapter markers is just a piece of metadata that describes a particular range of the timeline of the movie. That’s all that timed metadata is: metadata associated with a range of time.

Similarly, with our dynamic location example, the path that’s drawn here is really just composed of a number of pieces of metadata indicating the current location, each one of them associated with a particular time in the movie’s timeline. So, to demonstrate QuickTime Player’s features with dynamic location in Yosemite, I want to bring my colleague, Shalini, up to the stage for a demo.

Hi, I’m here to demonstrate how to read and play back metadata using QuickTime Player. Here I have a movie file which has audio and video, plus timed location data stored in a separate track. If I bring this up in QuickTime Player, this is the usual UI for audio and video. New in OS X Yosemite: in the Movie Inspector, you can see a map view if your movie file has location data. The map view is presented along with the route where you recorded this video. Here, the blue line indicates the path along which we recorded the video, and the red pin indicates the current location on the timeline of the movie. So if I zoom in a little bit and start playback, you can see that, as the movie progresses, the pin’s location is updated to stay in sync with the video. I can drag the scrubber around and you can see the pin moving back and forth. I can also click at any point on the map, and you’ll see the video seek to that location, to present where your video was when you were at that point. This is the map view in QuickTime Player on OS X Yosemite.

Thank you, Shalini. So let’s talk about what we just saw there. That location information was stored as timed metadata in the file, and in order to have QuickTime Player draw that information on the map, we use AVAssetReader to read all of the location information from the asset. Because timed metadata is stored in its own track, we use an AVAssetReaderTrackOutput to read that data, along with a new class called AVAssetReaderOutputMetadataAdaptor that knows how to give us the data in the form of a class called AVTimedMetadataGroup. From there, we can grab each location and draw the path on the map.

AVTimedMetadataGroup is a very simple class. It’s really just two properties: an array of metadata items, combined with a time range that describes where in the movie that data applies.

To see a little bit of code for using AVAssetReader for this purpose: the first thing you want to do is find the track that contains your location information; we’ll talk more about how to do that in just a second. Then you use that track to create an AVAssetReaderTrackOutput with nil output settings, and you create your metadata adaptor with that track output. Then, in a loop, you just call the adaptor’s nextTimedMetadataGroup method over and over again, doing something with each piece of data, like drawing it on the map, until the method returns nil. Then you know there’s no more data to draw.
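A sketch of that reading loop (“asset” and “metadataTrack” are assumed; error handling is abbreviated):

```objc
// Read a metadata track and vend its contents as AVTimedMetadataGroups.
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:metadataTrack
                                               outputSettings:nil];
[reader addOutput:trackOutput];
AVAssetReaderOutputMetadataAdaptor *adaptor =
    [AVAssetReaderOutputMetadataAdaptor
        assetReaderOutputMetadataAdaptorWithAssetReaderTrackOutput:trackOutput];
[reader startReading];

AVTimedMetadataGroup *group = nil;
while ((group = [adaptor nextTimedMetadataGroup]) != nil) {
    // Each group pairs an array of items with the time range they cover;
    // e.g. draw the locations in group.items for group.timeRange on the map.
}
```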

In terms of finding the right track to read: you do that by examining the tracks in your asset and looking through the format descriptions of each track for the identifiers you’re looking for. You first get the tracks with AVMediaTypeMetadata, and then, for each of those tracks, you loop through all of its format descriptions (usually there’s only one). For each format description, you grab its list of identifiers using the CMMetadataFormatDescriptionGetIdentifiers function and check whether that identifier array contains the identifier you’re looking for; in this case, we’re looking for the location ISO 6709 identifier. Once we’ve found it, we’re good to go and can resume with the code on the previous slide.
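A sketch of that track-finding loop (“asset” is assumed to be your AVAsset):

```objc
// Scan the asset's metadata tracks for one whose format description
// advertises ISO 6709 location metadata.
AVAssetTrack *locationTrack = nil;
for (AVAssetTrack *track in [asset tracksWithMediaType:AVMediaTypeMetadata]) {
    for (id untypedDesc in track.formatDescriptions) {
        CMMetadataFormatDescriptionRef desc =
            (__bridge CMMetadataFormatDescriptionRef)untypedDesc;
        NSArray *identifiers =
            (__bridge NSArray *)CMMetadataFormatDescriptionGetIdentifiers(desc);
        if ([identifiers containsObject:
                AVMetadataIdentifierQuickTimeMetadataLocationISO6709]) {
            locationTrack = track;  // this is the track to hand to the reader
            break;
        }
    }
    if (locationTrack != nil) break;
}
```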

So that’s how QuickTime Player draws the path on the map before you start playback. The other thing QuickTime Player does, as you saw, is update the current location during playback or even while scrubbing around. The way it does that, while it’s already playing the asset using an AVPlayerItem, is with a new class called AVPlayerItemMetadataOutput, which you attach to your player item and which also knows how to vend this data in the form of timed metadata groups. But unlike the asset reader, instead of getting all the data up front, you get it piece by piece as the movie plays.

For a little bit of code: you first create your metadata output using the initWithIdentifiers: method. In this case, we’re only interested in metadata that has the location identifier, so by opting in this way, that’s all we’ll get. Then you create a delegate that you define, which is what will receive the metadata during playback; you set that delegate on your output and tell us which queue you want the data delivered on. Then you create or grab your AVPlayerItem and call addOutput: to attach your output to the player item; finally, you make your player, associate your item with the player as its current item, and start playback. To get the smoothest playback experience possible, we highly recommend that you do all of this setup work before you start playback, or even attach the item to the player.
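A sketch of that setup (“asset” and “self” as the delegate are assumptions):

```objc
// Create the output, opting in to just the location identifier,
// and attach it to the player item before playback begins.
AVPlayerItemMetadataOutput *metadataOutput =
    [[AVPlayerItemMetadataOutput alloc] initWithIdentifiers:
        @[AVMetadataIdentifierQuickTimeMetadataLocationISO6709]];
[metadataOutput setDelegate:self queue:dispatch_get_main_queue()];

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
[playerItem addOutput:metadataOutput];  // attach before playback starts

AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
[player play];
```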

Let’s take a quick look at what your delegate method might look like. There’s only one delegate method: metadataOutput:didOutputTimedMetadataGroups:fromPlayerItemTrack:. The first thing you want to do is grab an item that you can get your payload data from. In this case, to keep things simple, I’m just grabbing the first item from the first group, but keep in mind there could be multiple items, and there could even be multiple groups. One reason this method could be given multiple groups is that the metadata output keeps track of whether the metadata is arriving faster than you’re processing it; if it is, it will start to batch the metadata up and deliver each batch when you’re done with the previous one.

Moving on with the item, you do the loadValuesAsynchronouslyForKeys: dance we talked about before. In this case, we’re interested in the value and dataType properties, so we load both of those. I’ve omitted the error checking for brevity here, but you’ll probably want to do that error checking like we had on the other slide. Once we’re in the completion handler, we can ask the item for its data type and make sure it’s one we’re prepared to handle; in this case, my code only knows how to handle location information in ISO 6709 format, so we make sure that’s the right data type, and from there we dispatch to the main thread to update our UI.
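A sketch of that delegate method; error checking is omitted here too, and -updateMapWithLocationString: is a hypothetical UI helper:

```objc
- (void)metadataOutput:(AVPlayerItemMetadataOutput *)output
    didOutputTimedMetadataGroups:(NSArray *)groups
             fromPlayerItemTrack:(AVPlayerItemTrack *)track
{
    // Keep it simple: first item of the first group (there may be more).
    AVMetadataItem *item = [[groups.firstObject items] firstObject];
    if (item == nil) return;

    [item loadValuesAsynchronouslyForKeys:@[@"value", @"dataType"]
                        completionHandler:^{
        // Only handle ISO 6709 location payloads.
        NSString *iso6709Type = (__bridge NSString *)
            kCMMetadataDataType_QuickTimeMetadataLocation_ISO6709;
        if ([item.dataType isEqualToString:iso6709Type]) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self updateMapWithLocationString:item.stringValue];
            });
        }
    }];
}
```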

So that’s how QuickTime Player updates the location metadata during playback. Of course, this is not the first API we have offered for reading timed metadata during playback. There is an existing property on AVPlayerItem called timedMetadata, but I’m here to say that AVPlayerItemMetadataOutput replaces that property for all of these use cases. Now, we’re not deprecating the property yet, but if you’re new to timed metadata, we recommend you just adopt the metadata output and not worry about the property. If you’re already using the property, we do recommend that you move over, but make sure your code is working properly after that transition; in particular, I’ll point out that, for certain kinds of HLS content, the metadata output will give you more specific identifiers than the old property did. So just make sure your code is prepared to handle that.

The last topic on reading timed metadata is chapters. Chapters, like I said, have been supported for some time; they even have their own API on AVAsset: chapterMetadataGroupsBestMatchingPreferredLanguages:. This will give you an array of timed metadata groups that contain items with the identifier QuickTimeUserDataChapter. We’ve supported this for some time for QuickTime movie files and M4Vs, and new in iOS 8 and OS X Yosemite is support for chapters in HTTP Live Streams as well as MP3 files. I’ll tell you more about how to author those HLS chapters in just a little bit.

For more information, we have some sample code that does approximately what QuickTime Player is doing, showing your location during playback. We also have a previous session about AVAssetReader that goes into much more detail than I did here: “Working with Media in AVFoundation”, from 2011. So that’s how you read and play back timed metadata.

Our next timed metadata topic is how you can create your own movies that contain timed metadata. We saw the screenshot before, and I mentioned that these annotations are stored as timed metadata. To show you this demo app, I’d like to invite Shalini back up on stage.

[ Silence ]

This time let’s look at an app that shows how to author your own custom metadata movie files.

Here I have a video, and if I would like to share some notes with my friend, who is good at fixing colors in a movie, I can now do that within the app. To add annotations, I use a two-finger gesture; I can use a pinch gesture to resize and then add a comment, which is enough for whoever looks at the video later to fix the colors there, and then I begin playback. As playback progresses, I track the circle to where I want this to be fixed.

Now that I have this annotation, I can write it out along with the audio and video. To do that, I hit “Export”, and now we see an AVPlayerViewController which shows the exported movie along with the metadata that was written to it. If I start playback, you see the annotation moving along the timeline in the part that I traced, and if I scrub back in time you can see the annotation moving. You might wonder whether the annotation is baked into the video frames; it is not. It is being rendered in real time using AVPlayerItemMetadataOutput, and you can change the color or the font of the annotation. So if I begin playback, you see the rendering is happening in real time.

That’s AVTimedAnnotationWriter; we have this available as sample code as well. Thank you.

So that was a great demonstration of not only the playback part but also how to write that data into the file, so let’s take a look at how that was accomplished.

We’re going to use an AVAssetWriter to write the file, and we’re going to use an AVAssetWriterInput to write that metadata track to the file. Just like the reader side, the writer has a new class, a metadata adaptor, and that class knows how to interpret instances of AVTimedMetadataGroup and write them into the file.

To see a little bit of code: the first thing we’re going to do is create our AVAssetWriterInput. We’re going to use the media type AVMediaTypeMetadata, once again with nil outputSettings, and we’re going to have to provide a hint to the source format, that is, the format of the data that we’re going to be appending. We’ll talk more about this and why it’s required on the next slide.

Then you simply create your metadata adaptor with a reference to that input and, as you generate or receive your timed metadata groups, you use the appendTimedMetadataGroup: method to append them and write them to the file.
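Put together, the setup just described might look like this Objective-C sketch; `writer`, `formatDesc`, and the groups being appended are hypothetical stand-ins for your own objects.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: a metadata input plus adaptor attached to an existing AVAssetWriter.
// `formatDesc` is a CMMetadataFormatDescriptionRef you created earlier.
AVAssetWriterInput *metadataInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeMetadata
                                       outputSettings:nil
                                     sourceFormatHint:formatDesc];
AVAssetWriterInputMetadataAdaptor *adaptor =
    [AVAssetWriterInputMetadataAdaptor
        assetWriterInputMetadataAdaptorWithAssetWriterInput:metadataInput];
[writer addInput:metadataInput];

// Later, as each AVTimedMetadataGroup is generated or received:
[adaptor appendTimedMetadataGroup:group];
```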

So what’s the deal with that source format hint? Well, it turns out that in order for AVAssetWriter to write your metadata in the most efficient way possible, it needs to know up front exactly what kind of metadata it is going to be writing. This results in the lowest storage overhead in terms of the number of bytes your file takes up, and it also has an effect on how efficient it is to play this kind of content back; you don’t want to be using too much power when you’re playing this kind of content back.

So you do have some options in terms of how you actually construct one of these format hints. If you’re reading from AVAssetReader, you can ask the track that you are reading from to give you its list of format descriptions and use one of those.

If you’re creating the metadata group yourself, or getting it from some other source, then you can use a new method called copyFormatDescription that will give you back an instance of CMFormatDescription that will do this job for you. It’s important to note that if you go this route, you need to make sure the contents of your metadata group are comprehensive, in that they cover every combination of identifier, data type, and language tag that you are going to be appending; that is, the group contains an item with each of those combinations. Of course, since the CMFormatDescription is a CF type, you’ll need a CFRelease call when you’re done.

Of course, there’s one more way you can do this: you can create the format description directly using Core Media APIs. Here you use the long-named CMMetadataFormatDescriptionCreateWithMetadataSpecifications function. You’re going to pass in the boxed metadata type; that’s the sort of metadata we’ve been talking about this whole time with timed metadata. And the metadata specifications are just an array of dictionaries; each dictionary contains one of those combinations I was talking about before: the identifier, the data type, and optionally the extended language tag. So you want to make one of these metadata specification dictionaries for each combination you plan to append.
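A minimal Objective-C sketch of that Core Media call follows; the identifier string is hypothetical, and error handling is omitted.

```objc
#import <CoreMedia/CoreMedia.h>

// Sketch: build a format description for one identifier/data-type combination.
NSArray *specs = @[
  @{ (__bridge NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier :
         @"mdta/com.example.circle-center",   // hypothetical identifier
     (__bridge NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType :
         (__bridge NSString *)kCMMetadataBaseDataType_PointF32 }
];
CMMetadataFormatDescriptionRef formatDesc = NULL;
OSStatus err = CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
    kCFAllocatorDefault, kCMMetadataFormatType_Boxed,
    (__bridge CFArrayRef)specs, &formatDesc);
// Pass formatDesc as the sourceFormatHint when creating the AVAssetWriterInput,
// and CFRelease(formatDesc) when you are done with it.
```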

So the one thing that was not obvious about that demo is that we’re actually writing timed metadata that describes one particular other track. For the example of these annotations, we’re really just talking about the video track of the movie, and not the sound or anything else like that.

Just as we had a way of making track-specific metadata that applied to the entire track, with those tagged characteristics we saw before, you also have the ability to formally mark your metadata track as describing one particular other track.

You do that with the addTrackAssociationWithTrackOfInput:type: method, passing as the parameter the AVAssetWriterInput that you are using to write your video track. The receiver is the input that you are using to write your metadata track, and you use the association type AVTrackAssociationTypeMetadataReferent. So that’s how you create metadata that’s timed but also specific to a particular track.
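As a sketch, with hypothetical inputs from the writer setup (`metadataInput` writes the metadata track, `videoInput` the video track):

```objc
// Sketch: mark the metadata track as describing the video track.
[metadataInput addTrackAssociationWithTrackOfInput:videoInput
                                              type:AVTrackAssociationTypeMetadataReferent];
```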

The next interesting thing we did in that demo is that we actually used our own custom identifiers. We had that big list of built-in identifiers, but you don’t have to use those; you can build your own. As I mentioned before, an identifier is just a combination of key space and key, and it’s a string in a particular format. So, to help you make your own custom identifiers, we have a class method on AVMetadataItem: identifierForKey:keySpace:.

There are some rules to follow: your key space needs to be four characters long if you want to use it for timed metadata, so we actually recommend you use our built-in key space, the QuickTimeMetadata key space. We also highly recommend you use reverse-DNS notation for your custom keys, to avoid collisions with other kinds of metadata. In a brief code snippet, you can see that you simply use this method to make your custom identifier and then set it on the identifier property of your mutable metadata item.
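That snippet might look like this; the reverse-DNS key and the comment text are, of course, hypothetical examples.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: build a custom identifier in the QuickTimeMetadata key space.
NSString *identifier =
    [AVMetadataItem identifierForKey:@"com.example.annotation.comment" // hypothetical key
                            keySpace:AVMetadataKeySpaceQuickTimeMetadata];

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.identifier = identifier;
item.dataType   = (__bridge NSString *)kCMMetadataBaseDataType_UTF8;
item.value      = @"Fix the color here";   // hypothetical annotation text
```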

So in addition to custom identifiers, you can also create your own custom data types. We’re all familiar by now, through this presentation, with some of the built-in data types we define; there are a lot more than these, but we’ve been using these quite heavily already. They are really useful, but sometimes you want your data type information to express more, maybe about the domain you’re working in. So if you are doing a serial number or a bar code kind of thing, you might want to define a data type such as “serial number as string” or “barcode image as JPEG”, so you have more specific information about what your metadata actually contains.

The way this works is that you have to tell us exactly how to serialize that custom data type, and you do that by declaring that your custom data type conforms to one of our built-in data types. So in this case the serial number conforms to the UTF-8 data type; under the hood it’s a UTF-8 string, but we know that it really represents a serial number, and the same goes for the barcode image. You do this by registering your data type using the CMMetadataDataTypeRegistryRegisterDataType function defined in Core Media. You can’t create your own custom base types, but you can create a custom type that conforms to our raw-data built-in type if your data type really is just a custom sequence of bytes.
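Registration might look like the following sketch; the data type identifier and description strings are assumptions for illustration.

```objc
#import <CoreMedia/CoreMedia.h>

// Sketch: register a custom "serial number" data type conforming to UTF-8.
OSStatus err = CMMetadataDataTypeRegistryRegisterDataType(
    CFSTR("com.example.metadata.datatype.serial-number"),   // hypothetical type
    CFSTR("Serial number"),                                 // human-readable description
    (__bridge CFArrayRef)@[ (__bridge id)kCMMetadataBaseDataType_UTF8 ]);
```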

So there are some rules for using AVAssetWriter to write timed metadata. Most importantly, every metadata item that you append has to have non-nil values for identifier, dataType, and value. Your identifier has to conform to the format we specify, so we highly recommend using the utility method we just talked about. The value has to be compatible with the data type: you can tell us that your NSString value is a UTF-8 string, but don’t try telling us that your custom class is a UTF-8 string, because we won’t know how to serialize it properly and the asset writer will fail. And, as I mentioned before, you have to create your AVAssetWriterInput with a format hint, and that hint must be comprehensive, as we described before.

So the last topic about AVAssetWriter and timed metadata is a recipe for creating your own movies that have the same sort of dynamic location we’ve seen a couple of times already. To do this, you can use the AVCapture audio and video data outputs and target that data at twin instances of AVAssetWriterInput and, at the same time, grab location information from Core Location and write that to its own AVAssetWriterInput.

For more detail, we’ve actually implemented that and made it available as sample code, so see AVCaptureLocation if you want to make your own movies that contain dynamic location. We also have sample code, as Shalini mentioned, for the demo we just showed you; that’s called AVTimedAnnotationWriter. And, of course, for more information about AVAssetWriter in general, see that same talk I referenced earlier: “Working with Media in AV Foundation”.

Two last quick topics about timed metadata. First, AVAssetExportSession: just as we’ve said, the export session will by default pass through any of your metadata that applies to the entire asset or an entire track, copying it to the output file. It will do the same thing with timed metadata that exists in the source file, provided that your destination file type is QuickTime movie. We’ll talk more about file types in just a little bit, but basically the export session behaves exactly as you would expect.

Our last timed metadata authoring topic is HTTP Live Streaming chapters. If you want to author chapters in your HLS stream, you can use the session-data tag we talked about earlier with the special data ID com.apple.hls.chapters. Your URL should point to a JSON file that describes the chapter information for that stream; for more detail on this, see that same link I referenced earlier for HTTP Live Streaming.

All right, so that is timed metadata; our next topic is privacy. Why is privacy important in this context? Well, any time you are writing your users’ data to a file, you need to be at least considerate of their privacy and aware that the metadata you write out to these movie files can contain user-identifiable information; the most obvious example of that is location. Because movie files can be distributed, and we want to protect the privacy of our users, our built-in sharing services do their best to strip out any potentially user-identifiable information, such as this location, and we recommend that you do the same.

So we’ve given you a utility for that called AVMetadataItemFilter. Right now there is only one filter that we make available, but it is geared towards privacy: the metadata item filter for sharing. It will strip out any of this sort of user-identifying information; location is only one example. It will also strip out anything it doesn’t recognize, because it doesn’t know whether that might contain user-identifiable information; that includes any metadata that uses identifiers you define yourself. It will leave in some things, like metadata that’s important to the structure of the movie, chapters being the best example of that, and also any commerce-related data like your Apple ID.

To use the metadata item filter, you first create your filter and feed it your original array of metadata items using the metadataItemsFromArray:filteredByMetadataItemFilter: method. This is a companion to that other filtering method, based on identifiers, that we’ve been using all day. Once you have your filtered array of metadata items, just set it on your AVAssetWriter or AVAssetExportSession as you normally would.
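In Objective-C, that flow might look like the following sketch; `originalItems` and `writer` are hypothetical stand-ins for your own objects.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: strip potentially user-identifiable items before writing.
AVMetadataItemFilter *filter = [AVMetadataItemFilter metadataItemFilterForSharing];
NSArray *filteredItems =
    [AVMetadataItem metadataItemsFromArray:originalItems
              filteredByMetadataItemFilter:filter];
writer.metadata = filteredItems; // or exportSession.metadata = filteredItems;
```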

Actually, things can be simpler if you’re using an AVAssetExportSession and only want to copy the metadata from the source asset, without adding your own. You just set the filter on the export session and it will do the filtering for you. This filters both static and timed metadata, but only the metadata from the source asset; if you set your own metadata on the metadata property, it won’t filter that for you, and you’ll need to do the filtering yourself using the process I just described. The only other thing to keep in mind is that the export may take more time when the filter is being used, because it has to go through and examine all of the metadata items.

So that’s privacy. Our last section of the talk today is some assorted best practices for when you are writing your own files that contain metadata.

First up: what if you’re writing timed metadata and you have multiple streams of metadata that use different identifiers? How do you get those into the same file?

Well, we actually have that situation in the demo app: the circle is comprised of two different pieces of information, the position and the radius, so we’re representing these in the demo app as two distinct streams of metadata. The most obvious way to get this into a file is to use two different AVAssetWriterInputs, which results in two metadata tracks in the output file; pretty simple.

But there is another way you can do it: you could instead combine those two different types of metadata into one timed metadata group and write that to a single AVAssetWriterInput, which results in only one metadata track in the output file, containing multiple different kinds of identifiers.

There are some advantages to this approach, not the least of which is that it can result in lower storage overhead and therefore, as we always see, more efficient playback. But there are of course pros and cons to everything. You’ll definitely want to consider combining your different metadata into one track if they are used together during playback and they have identical timing; this is definitely the case with the example we just saw, with the circle center and the circle radius. If those things are not true, then you might not want to combine.

In fact, one instance where you definitely do not want to combine is when you have one type of metadata that’s associated with another track in the file (like our annotations, which are associated with the video track) but another type of metadata, like location, that is associated with the entire asset. You don’t want to combine those into one track; otherwise your location, in that example, will become mistakenly associated with just the video track, and that’s not what you want.

So that’s how to deal with multiple streams of timed metadata.

The next topic is the duration of your timed metadata groups. When you get a timed metadata group from AVFoundation, it always has a fully formed time range; that means it has a start time and a duration. We actually recommend that, when you make your own timed metadata groups for appending with AVAssetWriter, you don’t bother giving us a duration.

To see how that works, here’s an example of a group that starts at time 0 but doesn’t have a duration. So how do we know when it ends? Well, we’ll wait until you append the next one and then say, “Okay, the end time of the first group is the same as the start time of the next one.” This ensures that your metadata track has a continuous stream of contiguous metadata, and we think that for most cases this is the best way to store your metadata.

The way you accomplish this is, when you’re making your time range, you just use kCMTimeInvalid for your duration and we’ll take care of the rest.

We do recognize that there are cases where you might not want contiguous metadata; you might want to author an explicit gap into your metadata stream. For that, our recommendation is that you give us, in the middle there, a group that contains zero items. This is the best way to author a gap in the metadata, and you do that simply by passing an empty array when creating your timed metadata group. Notice that we’re still using kCMTimeInvalid for the duration here: just tell us when the metadata silence, so to speak, begins, and we’ll figure out how long it lasts based on when you append your next non-empty group. So that’s how you write gaps in your metadata.
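Both cases might look like this Objective-C sketch; `adaptor`, `item`, and the start times are hypothetical.

```objc
// Sketch: open-ended groups whose durations are inferred from the next append.
AVTimedMetadataGroup *group = [[AVTimedMetadataGroup alloc]
    initWithItems:@[ item ]
        timeRange:CMTimeRangeMake(CMTimeMake(0, 600), kCMTimeInvalid)];
[adaptor appendTimedMetadataGroup:group];

// A gap: an empty group marks the start of metadata "silence";
// it ends when the next non-empty group is appended.
AVTimedMetadataGroup *gap = [[AVTimedMetadataGroup alloc]
    initWithItems:@[]
        timeRange:CMTimeRangeMake(CMTimeMake(1200, 600), kCMTimeInvalid)];
[adaptor appendTimedMetadataGroup:gap];
```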

Our last best practice: I mentioned output file type before, and here’s the longer explanation. AVAssetWriter and AVAssetExportSession support writing to a wide variety of file types: QuickTime movie, MPEG-4, and all sorts of others. Those file types can carry different kinds of metadata, and some have more restrictions than others about what kind of metadata can go into them.

The easiest situation is, say, an export session where the output uses the same file type as the source; in this example, they’re both QuickTime movie files. This is the easiest way to ensure that all of that metadata actually makes it into the output file.

If instead you’re using a different output file type, like MPEG-4 in this example, then some different things are going to happen. You’ll notice those last few items didn’t quite make it into the output file; that’s because they have no equivalent representation that works with an MPEG-4 file. If you look closely, you’ll also notice that the top two items have changed: although they sound very similar, they are slightly different identifiers, because that’s the kind of identifier that works with MPEG-4.

So both AVAssetExportSession and AVAssetWriter do a sort of three-step process. First, they try to pass the data through directly if possible; if not, they try to convert the identifier into an equivalent representation in the output file type; and if neither of those works, we have no choice but to drop that piece of metadata on the floor.

In terms of guidance on how to choose an output file type, my two recommendations are these. If you are using, say, an export session to copy all of the metadata, timed or otherwise, from the source asset to your destination file, the best approach is to use the same file type that you started with; if you don’t know what the file type is, you can use NSURLTypeIdentifierKey to find out. You can also always use the QuickTime movie file type, because that has the greatest chance of supporting your metadata no matter where it came from; if AVFoundation supports it, there’s a good chance it will be supported by the QuickTime movie file. And if you’re writing timed metadata, this is the only option: QuickTime movie is the only file format that supports it right now.
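Looking up the source’s type and reusing it might be sketched like this; `sourceURL` and `exportSession` are hypothetical, and error handling is omitted.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: discover the source file type, then reuse it for the export.
id uti = nil;
[sourceURL getResourceValue:&uti forKey:NSURLTypeIdentifierKey error:NULL];
if ([exportSession.supportedFileTypes containsObject:uti]) {
    exportSession.outputFileType = uti;
} else {
    // Fall back to QuickTime movie, which supports the widest range of metadata.
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
}
```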

Of course, good advice is always to check the results, no matter what. Check that your output files contain all of the metadata that you expect, and you can use some of the APIs we’ve already talked about if you want to do that at runtime.

Some guidance if that doesn’t end up being the case and you don’t get all the metadata you expect: you can try to do the conversion yourself. Especially if you have a custom identifier and are going to a file type that doesn’t support it, take a look at that long list of built-in identifiers we have, see if there is something roughly equivalent to what you’re trying to store, and do the conversion yourself. One particular example I want to call out that involves only built-in identifiers is going from ID3 to iTunes; AVFoundation currently isn’t going to do that conversion for you, but there’s no reason you couldn’t do it yourself, so once again just take a look at our long list of identifiers, match them up, and do the conversion in your own code.

So that is the end of the talk. Let’s see what we covered: we talked, obviously, a lot about metadata in AVFoundation. For inspection, we talked about all of the different classes you can use, including AVAsset and AVMetadataItem and how those work together; for authoring, we talked about AVAssetWriter and AVAssetExportSession, and even touched briefly on the capture audio and movie file outputs. We dove into timed metadata, including all the new features that enable things like the dynamic location and your own timed metadata, like the annotation demo. We also talked about privacy considerations and some best practices, like how to choose the right file type.

For more information, you can contact our evangelism team or see our programming guide, and there are some other related sessions you might be interested in. If you missed this morning’s presentation on “Modern Media Playback”, you can catch it on the video recording. Tomorrow there is also a camera capture talk focusing on manual controls, and on Thursday we’ll have a talk about direct access to video encoding and decoding, which I’m sure a lot of you will be interested in.
