Simplifying iPhone App Development with Grand Central Dispatch 

Session 211 WWDC 2010

Grand Central Dispatch makes it faster and easier than ever before to write asynchronous code. Learn new techniques that will help your app get more done while remaining responsive to the user. Discover how Grand Central Dispatch can help simplify or even eliminate complicated threading code.

Daniel Steffen: Good morning.

Welcome to Simplifying iPhone App Development with Grand Central Dispatch.

Thanks for being here on a Friday morning, after the beer bash, to learn more about GCD.

My name is Daniel Steffen, I’m an engineer on the GCD team, and this morning we’ll start with a quick technology overview, then go through a few examples of how to simplify your existing multithreaded code by using GCD.

Then my colleague will come up to talk about some design patterns that we encourage you to use when you program with GCD, and finally we’ll go into some of our GCD objects in depth and look at their APIs.

So we introduced GCD and blocks last year at WWDC for Mac OS X Snow Leopard.

And this year, we’re very pleased to bring it to you in iOS 4 for the iPhone.

As you can see here, Grand Central Dispatch lives very low down in our technology stack.

This allows it to make very efficient use of kernel resources and kernel primitives when necessary, and also allows all the higher levels of the stack to use it, and indeed many of our system frameworks already do, and now so can your application.

So as you saw in that picture, GCD is part of libSystem; that means that it comes along with very basic system functionality like malloc, and you don’t have to provide any special linker setup to get it.

It’s available to all apps and all you do is include the dispatch header to get started.

Note that the GCD API has both a block-based variant and the function-based variant.

Today, we’re only going to be talking about the block-based variant.

We feel that that’s the most convenient for new code. If you have existing C-based, function-based code that you’ve already factored, you may be interested in the function-based variant of the API, and we encourage you to look at the documentation for that.

So let’s do a quick recap of our first session this week, on Wednesday.

If you were unable to attend that, you’ll be glad to hear that we’ll be repeating the session this afternoon at 2 o’clock, so you may want to come to that.

In that session, we gave an introduction to blocks and why they are very convenient for encapsulating units of work that you want to do, maybe on a different thread, maybe asynchronously, or just as iterations. And we looked at how to use dispatch_async to get these blocks to run somewhere else, and then, when they’re done, get the results back to the main thread with another dispatch_async.
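That basic pattern from the first session can be sketched like this (the queue label and method names here are illustrative, not from the session):

```objc
#import <dispatch/dispatch.h>

// Hypothetical sketch: do slow work off the main thread,
// then hop back to the main queue to update the UI.
dispatch_queue_t queue = dispatch_queue_create("com.example.work", NULL);

dispatch_async(queue, ^{
    NSData *result = [self doSlowWork];          // hypothetical method
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithResult:result];        // hypothetical method
    });
});
```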

And the locations where these blocks run are called queues: GCD queues, dispatch queues.

And we looked at the details of those; they’re really just a lightweight list of blocks that you have submitted for execution, and the enqueue and dequeue operations on them are FIFO.

Then we looked at two types of queues. One is the main queue, which you get with the dispatch_get_main_queue API; it’s tied to the main thread and the main run loop, and you can use it to execute blocks that update the UI, for instance.

And we looked at queues that you can create yourself, called serial queues, which execute blocks one by one, but on an automatic helper thread in the background.

So a quick animation to remind you what async looks like.

We have the main queue here tied to the main thread, and one of these dispatch queues that we’ve created.

The main thread creates a block for some unit of work that it wants to do somewhere else, and that block can capture state and data, as well as the code.

And now it does a dispatch_async, which enqueues that block onto the dispatch queue; then the system notices that there’s some work available, creates an automatic thread for you, and runs the block.

When done, the automatic thread goes away again, and the system returns to steady state.

So let’s look at how you can simplify your existing multithreaded code with GCD.

Why would you want to do this?

You have your code already, and it works, but maybe the code you had to write to implement the threading was very complicated.

GCD brings advantages that may make you want to adopt it even for code that you already have.

In particular, it’s very, very efficient.

This leaves more CPU cycles for your code; fewer CPU cycles are used to maintain threading, synchronization, or interthread communication.

Also, GCD provides much better metaphors for multithreaded programming.

As we saw in our first session, blocks are very, very easy to use when doing multithreaded programming, and they allow you to express what you want to do somewhere else inline, directly at the point where you think, “now I want to do something.”

You do not have to push it somewhere else in your source files.

And the queue primitive in GCD is in turn inherently friendly to producer-consumer schemes, meaning that you don’t have to implement your own facilities to keep track of work that needs to be done on a background thread and communicate results back and forth.

And GCD provides a system-wide perspective on thread usage in your app, and this is really something that only an OS facility can do for you.

If you have threads in your application that you manage yourself, you cannot really share those threads with a framework that may also be using threads, or with a third-party library that may be using its own threads, and you cannot balance those subsystems against one another; only an OS facility can really do that for you.

GCD is also very compatible with your existing code so you don’t have to adopt it all at once.

It might make sense to just start using it in new code that you add to the app; the existing threading and synchronization primitives that you are using currently will be 100 percent compatible.

And in fact, GCD threads are just wrapped POSIX threads.

This means also that because the system creates those threads for you, you should not modify them in ways that will change them irreversibly, in particular don’t cancel, exit, kill them, etcetera.

You didn’t create those threads, don’t destroy them.

And because GCD reuses threads, you may have to be careful about modifying any thread state, like, for instance, the current working directory, in one of your blocks.

If you do that, make sure that you return it to the state you found it in, because the thread will be reused and will execute another block, maybe from a different queue, maybe from a system queue that you don’t even know about, and the block that comes after yours will otherwise find a state that it doesn’t understand.
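As a hedged sketch of that advice, a block that changes the working directory might save and restore it, for example:

```objc
#include <unistd.h>
#include <sys/param.h>

dispatch_async(queue, ^{
    // GCD threads are shared, so save any per-thread state we change...
    char savedDir[MAXPATHLEN];
    getcwd(savedDir, sizeof(savedDir));
    chdir("/tmp");
    // ... work that depends on the changed working directory ...
    chdir(savedDir);  // ...and restore it before the block returns
});
```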

So let’s talk about threads in some more detail.

Why use threads on the iPhone?

There’s only one core.

Well, as we’ve gone over in some detail in our first session, the main reason is to increase app responsiveness to get work off the main thread.

The main thread should really only be handling UIEvents and updating the user interface.

It shouldn’t be doing anything else and you probably are doing this currently and managing your threads yourself by using NSThread or even pthread_create directly.

And as you know, this has nontrivial costs, both in the manpower needed to write that code, maintain that code, and design your threading scheme, and in the CPU and memory costs of spinning up threads frequently on demand or otherwise having long-running threads hanging around for the lifetime of the application.

So let’s have a look at some code that you might be using currently to create on-demand threads.

Imagine we have an Objective-C method that needs to do a time-consuming operation and wants to spin up an on-demand thread for that.

So using the NSThread API, you would have to initialize a thread with a special-purpose selector that runs that thread’s code, run the thread, autorelease it, and implement that method, which sets up the thread and finally does the long-running operation.
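A sketch of what that NSThread boilerplate might look like (the selector and method names are hypothetical, and this follows pre-ARC, 2010-era memory management):

```objc
- (void)processData:(NSData *)data
{
    // Spin up an on-demand thread with a special-purpose selector.
    NSThread *thread = [[[NSThread alloc] initWithTarget:self
                                                selector:@selector(processDataInBackground:)
                                                  object:data] autorelease];
    [thread start];
}

// The special-purpose selector that runs the thread's code.
- (void)processDataInBackground:(NSData *)data
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // thread setup
    [self doLongRunningOperationWith:data];                     // hypothetical work
    [pool drain];
}
```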

How could we improve on that by using GCD?

Very similar, but we have a lot less boilerplate.

We create a dispatch queue with the dispatch_queue_create API, passing in a label that allows you to identify your queue in Xcode or in the crash reporter, and then we just dispatch_async a block that does the long-running operation to this queue.

And that is all that is needed.

You can release the queue if you don’t need it anymore at this point, and GCD will do all the thread management and all the resource management for this queue for you.
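The GCD version described above might look like this sketch (again, the names are illustrative, and dispatch_release reflects pre-ARC conventions):

```objc
- (void)processData:(NSData *)data
{
    dispatch_queue_t queue = dispatch_queue_create("com.example.processdata", NULL);
    dispatch_async(queue, ^{
        [self doLongRunningOperationWith:data];  // hypothetical work
    });
    dispatch_release(queue);  // GCD keeps the queue alive until the block finishes
}
```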

So what are the advantages?

Firstly, it is more convenient.

You can use blocks, you can remove a lot of your boilerplate and you don’t have to do any explicit thread management.

You don’t have to decide when to create these threads, or whether it’s a good idea to have many threads or just one; GCD will do that for you.

And it is more efficient: because GCD recycles threads, if you use it throughout, you will end up with fewer threads overall than if you managed them all yourself, just because we have a better view of the thread usage throughout your application.

And if you submit too many blocks to your queues, we will actually defer some of them based on the availability of the system and make things more efficient that way.

The next topic to look at is locking.

Obviously, when you’re doing multithreaded programming, you usually need to worry about locking: to enforce mutually exclusive access to critical sections of code, to serialize access to some shared state between threads, or just in general to ensure data integrity when you have some complex piece of state that always needs to be in a consistent state and that can be modified from multiple threads.

So let’s look at how you might be doing that currently with the NSLock API.

Here, as an example you have an Objective-C method that maintains an imageCache object that can be modified from multiple threads.

So you need to have some serialization mechanism to ensure that our shared object is updated safely. So we maintain this lock object as part of our state, lock it before we enter the critical section that modifies the shared object, and unlock when we are done.

One thing to note here: if you have any early-exit conditions or error conditions, you must not forget to unlock; this is an easy source of bugs when you use locking APIs.
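A sketch of the NSLock pattern just described, with a hypothetical early exit showing how easy it is to forget the unlock:

```objc
// `lock` and `imageCache` are instance variables maintained as part of our state.
- (void)updateImageCache:(UIImage *)image forKey:(NSString *)key
{
    [lock lock];
    if (image == nil) {
        [lock unlock];   // easy to forget on early-exit paths!
        return;
    }
    [imageCache setObject:image forKey:key];
    [lock unlock];
}
```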

How can we make this better and easier to use with GCD?

Very similarly, we will maintain a dispatch queue, one of the dispatch queues that you create yourself with dispatch_queue_create.

That’s part of our state, and we use the dispatch_sync API to execute a block that wraps the whole critical section on that queue.

Now, this is a slight change in perspective but it really has the same effect.

These queues will only ever run one of these critical-section blocks at a time, so once you’re inside the block, you know that you’re the only one executing on this queue, the only critical-section block that is executing at this time.

So it’s just as if you had taken a lock around this critical section.

And note that we don’t have to do anything special in the early-exit case. There is no unlocking; returning from the block is the unlock in this pattern, so this removes that whole source of issues.
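The same critical section with a serial queue instead of a lock might look like this sketch:

```objc
// `cacheQueue` is a serial queue created once with dispatch_queue_create,
// maintained as part of our state in place of the lock.
- (void)updateImageCache:(UIImage *)image forKey:(NSString *)key
{
    dispatch_sync(cacheQueue, ^{
        if (image == nil) return;   // returning from the block is the "unlock"
        [imageCache setObject:image forKey:key];
    });
}
```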

And you can do things with queues that you cannot do with NSLock or other locking APIs.

You can implement a deferred critical section, and here you just use dispatch_async to async that critical-section block.

And for this method, that would make a whole lot of sense.

Probably the caller of the updateImageCache method doesn’t care that the cache is updated right away, as long as it’s updated safely at some point, right?

So the caller doesn’t need to wait for the synchronized critical-section block to actually execute, and can use dispatch_async to do this in a deferred way.

And again, once we’re inside, we know that ours is the only critical-section block that is executing now, so we can safely modify the shared object.
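The deferrable variant is the same sketch with dispatch_async; the caller no longer waits for the update:

```objc
// `cacheQueue` and `imageCache` as before; this is a hypothetical sketch.
- (void)updateImageCache:(UIImage *)image forKey:(NSString *)key
{
    // The caller returns immediately; the cache is updated safely later.
    dispatch_async(cacheQueue, ^{
        if (image == nil) return;
        [imageCache setObject:image forKey:key];
    });
}
```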

So the advantage of using GCD queues instead of locks is that it is safe.

You cannot forget to unlock, and it is more expressive.

It allows you to say things like the deferrable critical section example that we saw.

And again, it is more efficient.

In fact, using queues instead of locks is much more efficient in general.

If there’s no contention, this implements wait-free synchronization.

You can think of queues as an on-demand locking mechanism.

Only if there is contention do we actually create the expensive lock and the associated interthread communication.

And if GCD detects that nobody else is currently in the critical section, it will be very, very cheap.

Next stop is interthread communication.

If you have multiple threads, you have to be able to communicate among them: to send messages, to wake up a background thread that may be waiting to do some work, or to transfer data, or at least data ownership, between these threads. The general mechanism that you use in Foundation for this is the performSelector family of methods, and here we have four that have a relationship to threading.

performSelectorOnMainThread: we looked at in some detail in the first session, so we’ll skip that one and look at the other three.

So how would you do performSelector:onThread with GCD?

So this runs the specified selector on one of your manually managed NSThreads.

And with GCD, instead, we just use a queue, and in the waitUntilDone:NO case, where you don’t have to wait until the selector has executed, you just use dispatch_async.

And inside the block, you can do everything that the selector you were performing before could have done; indeed, you don’t need to implement that selector anymore.

You can do it inline at the point where you had to perform selector before.
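Side by side, the change might look like this sketch (workerThread, workerQueue, and updateModel: are hypothetical names):

```objc
// Before: a dedicated selector on a manually managed thread.
[self performSelector:@selector(updateModel:)
             onThread:workerThread
           withObject:data
        waitUntilDone:NO];

// After: the same work written inline against a queue.
dispatch_async(workerQueue, ^{
    [self updateModel:data];
});
```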

If you have to waitUntilDone, you just use dispatch_sync instead.

For performSelector:afterDelay:, which executes a specified selector after some amount of time, we provide an API called dispatch_after.

This is exactly like dispatch_async, except that the enqueueing happens after a user-specified delay.

I won’t go into detail on how to specify the delay.

It’s a pretty simple API that you can look up in the documentation.

But here, this will just run this block after 50 microseconds in this example.

And again, because it’s a block, you don’t have to implement the special selector.

You can just do what you want to do in the delayed fashion inline.
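The 50-microsecond example might be written like this sketch (refreshView is a hypothetical method):

```objc
// dispatch_time builds the delay relative to now, in nanoseconds.
dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, 50 * NSEC_PER_USEC);
dispatch_after(when, dispatch_get_main_queue(), ^{
    [self refreshView];   // enqueued about 50 microseconds from now
});
```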

performSelectorInBackground: is the Foundation facility that does create an on-demand thread for you.

And in this case, you don’t really care when, or in relation to what other things, your selector runs.

So here, we don’t force you to create one of these dispatch queues.

You can get one of the global queues that we provide (we’ll talk about these more in a second) and, again, just do a dispatch_async to this global queue.
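A sketch of replacing performSelectorInBackground: with a global queue (doWork is a hypothetical method):

```objc
// Before:
[self performSelectorInBackground:@selector(doWork) withObject:nil];

// After: no thread of your own, no dedicated selector.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self doWork];
});
```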

So using GCD for interthread communication is more flexible, mainly because of blocks.

A block can call any selector, or multiple selectors, and there is no need to marshal and unmarshal, pack and unpack, arguments, as we saw in our first session.

You can call any selector taking any number of arguments, and you don’t have to do the usual dance that you have to with performSelector.

And it is also more efficient, because, firstly, having the block capture arguments is more efficient than marshaling and unmarshaling them, and we wake up helper threads, or create them on demand, for you.

So you don’t have to figure out how to do efficient on-demand thread management yourself.

So let’s talk about these global queues that we just saw in more detail.

Global queues also, like the queues that you create yourself, have enqueue and dequeue operations that are FIFO.

The big difference is that they may execute the blocks on them concurrently.

This means that the completion order of the blocks on these queues may not be FIFO.

And you get them with the dispatch_get_global_queue API, passing in a priority argument.

So let’s see a quick animation of what this would look like.

This thread here will enqueue some blocks onto the global dispatch queue.

So as you can see, they get enqueued in order.

And now the system notices there is some work to be done, so it creates automatic threads, two in this case, and starts running these blocks.

Now A has completed, and then C completes before B.

So that’s the out-of-order part of the completion.

And when the work is done, the automatic threads go away again, and you return to a steady state.

So global queues are actually where GCD activity of all types is mapped to real threads.

We allow you to control the priority of these threads with three bands, and this is the argument that you pass into the API that we just saw.

So we provide high, default, and low priority bands.

And you can use these same bands to control the priority of the queues that you create yourself, that is, of the threads that run the blocks on those queues, and you will see how to do that later on in the session.

OK. So I would like to hand it over to Shiva now for a section on GCD design patterns.

Shiva Bhattacharjee: Thanks Daniel.

Wow, we have a full house.

Either you’re very serious programmers or you’re not drinking hard enough.

[ Laughter ]

So Daniel showed how you can take your existing code.

He went through snippets of code where you can take this code and use GCD to simplify it.

And in this session, we are going to step back and see some of the ways you can use GCD in your application to simplify the overall design.

We kind of mentioned this in the previous session.

GCD is this new way, or new paradigm, of thinking which will help you move from your application logic to an implementation much, much more simply.

So one of the patterns that kind of came out of the last session was to assign a queue to each task.

The idea here is you take your application and break it down into different tasks, and then you can work on those tasks individually, and they will complete as data becomes available to them, or in different priority orders.

Well, when tasks are executing simultaneously, you need ways to communicate between those tasks, and you saw that you can do this easily with dispatch_async: you dispatch_async from one task to another, just passing the queue to which you want to communicate.

And most of the time, you will be passing data.

So you can imagine you have a network application that is reading in data from a socket and then wants to do the parsing on another task.

So one task could be responsible for actually doing the read syscall, and then, once you have the bytes, you want to send them off to another task to do the parsing.

And with blocks, you can easily encapsulate that and move it over.

So GCD helps you with that.
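That read-then-parse handoff might be sketched like this (readQueue, parseQueue, and the methods are hypothetical):

```objc
dispatch_async(readQueue, ^{
    NSData *bytes = [self readAvailableBytes];   // does the read syscall
    dispatch_async(parseQueue, ^{
        [self parseData:bytes];                  // parsing happens on its own task
    });
});
```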

So one of the things you might wonder is, well, are queues lightweight?

I mean, if my application is broken down into many queues, is this actually going to be an issue for performance?

And it’s not.

As you have seen, we do automatic thread recycling.

So if you actually have a lot of queues, it doesn’t necessarily mean we are committing that many threads at that point.

As work becomes available, we will create these threads on demand, so it is very efficient.

So we encourage you to try to think of your applications in these terms.

It will just help you overall with the GCD APIs.

The other pattern we want to encourage is to use low-level notifications.

So this is something that you’re probably already familiar with.

The idea here is, just as in the UI, you touch a button or manipulate a control, and then you respond to those events.

So similarly, you want GCD to help you monitor events and then respond to them on demand.

So in this case, again taking the example of a network app, you don’t want to go ahead in your block and do a blocking read or receive syscall, because that’s just going to block the thread while it waits for data to arrive.

Ideally, you want some facility that will say, hey, data is available for you, and then you go ahead and read the data.

Similarly, if you’re interested in knowing whether files are being deleted from or added to a directory, you don’t want to poll that directory all the time.

You want to get notifications effectively saying, hey, files have been added, and at that point you respond.

So, we export dispatch sources.

These are exactly what you would expect them to do.

They monitor these sources.

You register event handlers for them and when the event fires, you can do your handling.

So this makes your apps even more responsive.

Nowhere in your code now do you have threads that are just blocked waiting for things to happen; rather, the system tells you when things happen.

Once you know that there is work, you dispatch a block to handle it.

So let’s get into more detail with dispatch sources.

We provide a single API, and with that single API you can monitor a variety of low-level sources.

Now, if you’re familiar with our platform, you might be thinking of run loop sources, which you can register with the main thread or with any other thread.

And the nice thing with dispatch sources is the event handler can run on any queue.

And while the handler is running on this other queue, you are still monitoring the source.

Generally, with a run loop source, what happens is you register the source with a thread, and since the main thread is readily available to you, we see most app developers just registering it with the main thread.

Now, when the event fires, your main thread is busy handling that event-handler block, and therefore it’s not actually monitoring that source, or any other sources that you have registered with it.

So to get efficiency, you’d spin off another thread to actually do the work and let the main thread go back to listening, and we are back to square one, where you’re having to do thread management explicitly.

But even in that case, if you want to improve performance, you also have to be careful that while you’re doing the event handling, the same event might fire again.

So you have to be careful about dispatching two of those event handlers at the same time, running on two different threads.

With GCD, even though GCD keeps monitoring these events while you are handling an event, it will not actually invoke your event handler again while you’re doing so.

It will wait, so in a sense the handler is reentrant-safe; you can do critical actions in your event handler knowing that it is the only handler for that event that is going to be running.

Also, you can suspend dispatch sources.

Say you are no longer interested in monitoring or handling the event for the time being.

You are doing other work, so you can suspend the source and resume it at will, and GCD will keep monitoring the event while you’re doing that.

So, let’s look at an example of how we would create a read source.

So first you have a socket.

You set the socket to nonblocking, and we will come back to why we set it to nonblocking.

Then you create a source with dispatch_source_create.

You pass in the type of source you’re interested in, in this case the read type, and the socket.

So that is the file descriptor backing the read source, and you pass in a target queue.

This is the queue on which your event handler blocks will be enqueued when the event fires.

Then you set the event-handler block with the dispatch_source_set_event_handler call, and this is where you would actually do the read syscall.

Now, this is why we set the file descriptor to nonblocking: we don’t want to block in the read call.

There could be cases where a read returns no data and you just want to retry it.

The subtle thing here is that if read returns an error such as EAGAIN, you do not have to write a do-while loop around it yourself.

The event source is the one that actually drives that event handler for you, so it makes your event handling much simpler that way.

And dispatch sources are created in a suspended state, partly so that you can set the event handler and the other configuration that we’ll see later down the road.

Once you have set all these things up, you resume the source, and at this point your dispatch source starts monitoring that file descriptor.
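Putting those steps together, a read source might be set up like this sketch (the buffer size and UI-update method are assumptions, not from the session):

```objc
#include <fcntl.h>
#include <unistd.h>

fcntl(sock, F_SETFL, O_NONBLOCK);   // don't block inside the handler's read()

dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
                                                  sock, 0, queue);
dispatch_source_set_event_handler(source, ^{
    char buffer[4096];
    ssize_t actual = read(sock, buffer, sizeof(buffer));
    if (actual > 0) {
        NSData *data = [NSData dataWithBytes:buffer length:actual];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateUIWithData:data];   // hypothetical nested async to the UI
        });
    }
    // On EAGAIN just return; the source will fire again when data arrives.
});
dispatch_resume(source);   // sources are created suspended
```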

So, let’s look visually at what happens.

You have the main thread which is tied to the main queue.

You create a read source and set the target queue to a dispatch queue that you have created.

Well, data comes along.

That causes the read source to fire and then the event handler gets enqueued on the dispatch queue.

An automatic thread comes along because it sees there is work to be done on this queue, reads off the data, and clears the condition, and the read source is able to fire again.

Now, you might want to update your UI with the data that you have received.

So you async a block onto the main queue, and you can do that with dispatch_async.

So it’s kind of like a nested dispatch_async that you do from within your block.

So the read source is reset and is able to fire again.

The automatic thread goes away because it has finished the work, and the UI finally gets updated.

So we have mentioned this in passing.

You can suspend a source, or the event handler could be running while the source is still firing, and dispatch will coalesce the data for you in the background.

You do not have to do anything else.

And dispatch_source_get_data is the API whereby you get the merged result of what happened during the time when you were not handling events.

So, you don’t have to worry about, you know, how do we do the coalescing.

It’s very high performance so it’s OK not to handle the event.

We will do the right thing.

You do not have to feel obligated in any ways that you need to handle an event at a certain frequency or anything like that.

And dispatch sources are based on BSD kqueue, so that means we offer the facility to monitor quite a big variety of low-level events.

So let’s go through some of the source types that we support.

You have already seen the read source.

And similarly, there is the write source.

It takes in the file descriptor to which you want to write, and the count, when you call dispatch_source_get_data, tells you how many bytes are available for you to write.

We also provide a vnode source.

This is probably what you would use if you want to monitor modifications to a directory structure or to a file.

And when you get data on this one, it’s going to say what event actually triggered it, meaning whether a new file was added to the directory or one was removed, similar things like that.

We also provide the timer.

It doesn’t take a handle because it doesn’t need to.

You just specify the interval at which you want this source to fire.

And if you have suspended the timer, or you were busy handling the event for a long time, then when you call get_data it’s going to tell you the number of intervals you missed since you last handled the event.

And there are custom dispatch sources, which just provide application-specific ways to do things.

You can look up the documentation.

I would imagine most people would not need to use it.

It’s just here for completeness purposes.

So let’s look at an example of how you would set up a dispatch source timer.

You set up a timer and give it an interval; you want this timer to fire at that interval.

In this case, the target is the main queue, so when this timer fires, the event-handler block is going to get delivered on the main thread.

Now, you might want to suspend the timer. Imagine you are drawing a progress bar on the screen; you don’t want to be updating your UI every now and then while you suddenly have a lot of other work to do.

So you suspend the timer, and that’s fine, because in the meantime we will continue to monitor how many seconds have passed.

So in this case, this is a timer that fires every second.

And then once you are ready, you resume the timer.

Well, at that point the timer will deliver the handler to the main queue, and you would notice that we figured out there were four seconds during which we didn’t handle this event, so you can get that data and update your UI accordingly.

And once you have handled the event, the timer source goes back and is ready to fire again.
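The timer example might be sketched as follows (the progress-bar method is hypothetical):

```objc
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER,
                                                 0, 0, dispatch_get_main_queue());
dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
                          1ull * NSEC_PER_SEC, 0);   // fire every second
dispatch_source_set_event_handler(timer, ^{
    // Reports the number of intervals since the handler last ran, so ticks
    // missed while suspended are coalesced into a single value.
    unsigned long ticks = dispatch_source_get_data(timer);
    [self advanceProgressBarBy:ticks];   // hypothetical UI update
});
dispatch_resume(timer);

// While busy: dispatch_suspend(timer); ... later: dispatch_resume(timer);
```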

Alright, so we have suspended sources and we have resumed sources.

Now, what happens when you’re not interested in monitoring the source anymore, when you’re not interested in the event anymore?

So there is source cancellation.

Canceling a source effectively means that no further events will be delivered to you.

Now, note this is asynchronous, which means it’s not preemptive in nature.

If you’re already handling an event, it will not stop that event from being handled.

But no further events will be delivered to you.

The cancellation handler is another block that you can set up your source with, and it gets invoked when the cancellation happens on the source.

This gives you a way to deallocate the resources you might have acquired when creating your source.

So in the instance where we create a read source, we pass in a file descriptor, but you want to know when it is OK to close that file descriptor.

And because things in the dispatch world happen asynchronously, when you call dispatch_source_cancel, it doesn’t necessarily mean the source has been canceled immediately.

So this is a way for us to give you a notification that it’s OK to clean up, and that’s why it is effectively required for all the sources that take in a file descriptor.

And note that the cancellation handler is delivered only once; it is the last event that gets delivered, and it gets delivered on the target queue that you set up at creation time of the source.

Now, suppose you have suspended a source and then cancel it at a later point. So you suspended the source, you said, “I’m too busy handling other events,” and then later down the road you’re like, “I don’t care about the source anymore.”

So we would cancel the source and no events would be delivered, but no events were being delivered to you anyway, because you had the source in a suspended state.

But we will not be able to deliver the cancellation handler until you resume the source, so make sure that you end up resuming the source and not just canceling it.

So let’s look at an example of how we would cancel.

It’s a very similar example to what you saw before.

We create the source and pass in the file descriptor, the socket here, and the main goal here is how to handle the proper life cycle of this socket.

Well, you set up a cancellation handler similar to how you would set up an event handler, and here in this block you see we are just closing the socket.

And in the event handler block, when dispatch_source_get_data returns zero, this is the time when maybe the other side of the connection has closed it.

You get an EOF, which means it's OK for you to close the connection.

Or you might decide that this is the time when you want to reconnect.

So, in your cancellation handler, you would have code to reestablish that connection. But that's how you would cancel the source.

And as was stressed before, all of this just sets up and configures the source; remember to resume the source to actually start monitoring.

So, target queues.

So we have mentioned this, at creation time of sources you pass in target queues.

These are the queues where your event handler blocks are going to get delivered.

Now, for most of you, if you're new to dispatch and you're using the dispatch APIs for doing asynchronous work, this is good enough.

When you create sources whose events you want to respond to, you can do that with what we have talked about so far.

And now, we are going into Jedi training, which is that you can even change the target queues of these sources.

So, not only at creation time can you pass in a target queue.

While events are being handled, you might dynamically say, "I don't want my event handlers to be delivered on this queue anymore."

I want them to be delivered on other queues.

So you can do that with this API, dispatch_set_target_queue.

Well, not only can you change the target queues of sources, you can change the target queues of queues. This is where it gets a little more confusing, but if you follow along (I have some visualizations coming up on this), you'll probably get more of a sense of what GCD is capable of doing.

So Daniel mentioned this in passing.

He mentioned that you have global queues, and the global queues are where the real work gets done.

So we had the semantic model where you have a queue, and an automatic thread comes in and executes the blocks on that queue, and that's a good semantic model to have.

But your blocks ultimately get executed via the global queues.

Now, when you set the target queue of a dispatch queue, you can imagine that's effectively saying: hey, if I have a subqueue and I set the target queue of this subqueue, then the blocks running on the subqueue are effectively being run by that target queue.

Now, we never passed in a target queue when we created our dispatch queue.

We just passed in a name and that was it.

That's because we set the target queue of the dispatch queues that you create to the global default priority queue.

So the queues that you create are, by default, going to get run on the default priority queue.

So, let’s quickly look at a code example and then we will move to a visualization which will probably make it simpler to understand.

So here, you would create a queue, probably familiar to many of you from the first session.

And this is how you would get one of those background queues.

So, as I mentioned, when you originally create this queue, its target queue is the default queue, and you might want to change it to the low priority queue.

So, in this case, we get the low priority queue, and then we set the target queue to that low priority queue.

So now, overall in our system, we have four queues: first, the main queue, which is being drained by the main thread.

So, this is the queue you want to submit work to when you want to update the UI.

And in addition, we have these three global queues: the low priority one, the default, and the high priority one. Blocks submitted to the low priority queue are drained by low priority threads.

The default queue is drained by default priority threads.

And the high priority queue is drained by high priority threads.

Say we have a source A, and we set its target queue to be the main queue.

So when events fire for this source, the event handling block is going to get delivered on the main queue.

Now, you create a dispatch_queue, and by default, the target queue of this queue becomes that of the default priority.

You create another source, in this case, source B, and you set the target queue of source B to that queue that you have created.

What that means is that when events for this source fire, the event handling blocks ultimately get run by the default priority queue.

You can change the target queue of your queue to the low priority queue using the example that we saw before.

What this changes is that now the event handling blocks of source B are going to get run by low priority threads.

Now, before we change the target queue of source A, source A and source B could fire independently of each other, and the event handling blocks would be handled independently of each other, one on the main thread and one drained by the low priority threads.

If you set the target queue of source A to this intermediate queue that you have created, then both of these event handling blocks get channeled through this queue.

And because of the serial FIFO nature of this queue, only one event handling block is going to get executed at a time.

So, from this parallel, simultaneous handling of both events, you now have only one event handling block being executed at a time.

So, target queues are quite powerful.

You can create all these hierarchies and you can solve complicated ordering problems with them.

But don’t go too crazy because there is no way for us to do detection of loops.

And in your particular instance, you might find, oh, this is quite simple, but believe us, we tried it, and we couldn't do it very well.

So, it’s difficult to do loop detection in these kinds of things.

Now, when you set the target queue of a queue, that is, when you set the target queue of a subqueue to another queue, you also get some kind of block ordering guarantee. And what do I mean by that?

Imagine you have blocks that were getting executed on your subqueue, and you decided to change the target queue of this subqueue to a parent queue.

That means that at that point, all the blocks that were enqueued to run on the subqueue effectively get moved as one super block containing all of them, which gets run on the new target queue.

So here, hopefully, that visualization will help.

You have a queue A; again, by default, its target queue is set to the default priority queue.

You have queue B and queue C.

All the blocks are running concurrently.

Well, the blocks within queue B are still running serially, but blocks from queue A, blocks from within queue B, and blocks from within queue C could be executing at the same time.

Now, you change the target queues of queue B and queue C to that of queue A.

What you've done with this: instead of the simultaneous execution of blocks in queue B and queue C, since you've channeled them into queue A, now only one block is going to run at a time because of queue A's serial FIFO nature.

Now, you might wonder: why would I stack these queues, what's the point of this kind of arbitrary hierarchy?

One way to look at it: say, for instance, you are writing a transaction system in which you have different types of operations, an update, a delete, and things like that.

But obviously, these operations are working on the same data structure.

So, you have seen previously that you can use queues as a locking mechanism.

So, you have a queue protecting that data structure.

But you also want partial ordering between the types of operation that are coming in.

So you have a bunch of update operations, a bunch of delete operations.

So, it is important for you to order the update operations among all the update operations that there are, and likewise to order the delete operations among themselves.

So, you have a queue per type of operation to do the serial ordering within each type of operation.

You also get the flexibility to manage these operations.

Say you want to suspend all delete operations for the time being; you can actually suspend that queue, as we'll see later.

So, it gives you a lot of flexibility in the ways you want your data structure to be accessed, and you can build the flexibility you get from queues into your algorithms if you want to.

So, we have talked about sources, we have talked about queues.

And those are the main, big parts of GCD.

But GCD is a little bit bigger than just these two things.

As I just said, queues and sources are a big part of it, but there is other stuff in GCD, and it's all available to you.

But we are going to concentrate on the common operations that you can do on these GCD objects.

Well, one of these is retain and release.

You have seen that you can do dispatch_retain and dispatch_release to do your memory management of these objects, very similar to what you would do on CF objects.

With the dispatch calls themselves, dispatch actually retains the queues that you're asyncing on, which means that when you call dispatch_async or dispatch_sync and you pass in a queue, we retain the queue for you, so you do not have to do explicit memory management for those calls.

In other words, if there is a dispatch API that requires you to pass a dispatch object to it, we will do the proper memory management for you during that call; you do not have to worry about it.

Let's talk a little bit about managing object lifetime and why it is important.

The issue here is: you have a function A, and from within the body of function A, you call dispatch_async to async a block.

When function A returns, at some point later, that block is going to get executed asynchronously.

So, whatever variables you've captured within the block from the function body, you have to make sure that they have the proper lifetime.

So, that is where this lifetime management comes into play, because you're dealing with an asynchronous model: a piece of code working on data that is beyond the function's scope.

So what you have to do, therefore, is make sure that you retain the objects captured within these blocks.

To extend the lifetime beyond the function's scope, you have to make sure these objects are retained, so that at a later point, when dispatch executes the block, these objects are still valid.

The good news, the very good news, for most of you is that most of the time you're dealing with Objective-C objects, and the blocks runtime will do this automatically for you, so you don't have to think about it too much.

But for other objects: we went over this in great depth in the first session, so if you take a look at that, or come to the repeat, you will see how we retain the objects that we need to.

The idea here is that if you're capturing CF objects, blocks will not be very helpful in managing that lifetime, and you have to explicitly retain and release them.

We’ve kind of already seen this.

We can suspend and resume dispatch objects.

So, for dispatch sources, suspension is very clear: it means your event handling is suspended; event handler blocks are not going to get enqueued anymore.

For queues, suspending a queue effectively means blocks enqueued on that queue are not going to get run.

One thing to note here again is sources are created in a suspended state.

Now, you might be getting tired of this same thing that I’m repeating.

One of the things that we see is people create sources, forget to resume them, and then blame Grand Central for not monitoring those sources.

So, just make sure that you resume the source to actually start monitoring.

Also, when you call suspend on a queue, it is asynchronous.

The queue executes blocks one at a time; suspension is not going to preempt the currently executing block. That block finishes, and then the queue suspends.

But imagine you have a source, and you set the target queue of that source so that its handler fires on a particular queue.

You can reliably suspend this source by calling dispatch_suspend on that source from the event handler.

Because when the event handler is firing, that means that is the only block that is working on the source.

Even though events may keep firing, and GCD is maintaining and coalescing the state in the background as they fire, the event handler is the only user block that is getting executed.

So, you can suspend the source from within the block, and this will reliably suspend the dispatch source.

So, we've tried to stay as close to CF objects as possible in this case.

And we also provide application context to dispatch sources.

So, we have set and get accessors.

You can set a context on any of these dispatch sources.

And we also provide finalizer callbacks.

The idea is that when the dispatch source is destroyed, you cannot actually know reliably what the state is.

The object is already destroyed, and therefore you can only call a function pointer at that point, not a block.

So, that’s about it.

You can get more information in the Developer Forums.

There’s a good GCD guide that you can look into.

You can ask Michael.

And you can even look at the code yourself.

It’s open source, so feel free to use that resource.

There is a repeat session for this that’s happening today at 2 o’clock.
