Fixing Memory Issues 

Session 410 WWDC 2013

Using memory responsibly can be the key to stability and ensuring a great user experience. Get a look at new memory tools and learn helpful patterns for dealing with common memory issues specific to reference counting in Objective-C.

Good afternoon and welcome to fixing memory issues.

My name is Kate Stone, and I'm responsible, among other things, for managing the team that develops the Instruments product that hopefully you all know and love and will learn a lot more about in this session.

But, of course, the focus of our session is not Instruments, it’s on your application.

So glad to see so many of you here today because our goal is to make your application the best that it can possibly be.

We want applications that are robust, that are fast, that don’t hog too many system resources and work really well with other applications on the system.

So with that in mind let’s start thinking about what could possibly go wrong.

Your application, of course it occupies memory.

From the very beginning an application that’s up and running, it’s loaded into memory, it starts executing, starts loading data.

It starts loading more and more and more data.

And there are a couple of potential side effects of this.

If you get too much you start seeing a poor user experience.

It could mean long load times.

It could mean you’re trying to get too much up and running before the user sees something meaningful.

It could mean you’re using so many system resources that you wind up swapping on an OS X system and slowing the system down.

It could mean that when users try to switch to their other applications, they find they've been forced out of memory by your application.

So there are a lot of potential downsides, and it really behooves us to keep things under control.

Of course, the situation could get worse.

It could be that you're using so much memory, so many resources on the system, that you are forcibly terminated.

This can happen in a variety of ways.

We’ll discuss all of them.

But in particular you should keep in mind that your application being terminated aggressively while the user is watching is only the worst-case outcome.

It’s also possible that the user switches away from your application, and because you’re using too much memory the system needs to clean that up, it discards your application, and it’s slow to come back to your app the next time.

And that’s another form of poor user experience.

So, again, memory is really critical in this sense.

But it’s critical in another sense as well.

Poor management of memory can lead to bugs that result in the immediate crash of your application even though your resources may not be excessive.

So we’re going to focus on all of these problems throughout this session.

Specifically, if you look at our agenda, we’re going to talk a little bit about app memory in general.

What does it look like?

How can you think about it, and what kinds of tools do we have to help you understand what’s going on?

We're going to talk about the Heap, because every memory discussion discusses the Heap and, in fact, that's an area where not only do you have a lot of resource use, you also have clues for finding other areas where memory is being occupied.

And we’ll talk about how that works.

I'll hand it over to one of my engineers, and we'll move on to talking about Objective-C, challenges on that front including the retain/release pattern.

And, lastly, how to be a really good citizen in the environment.

So an overview of application memory.

Thankfully, in this release (and doubtless you've seen this numerous times before) we now have the debug gauges in Xcode.

The debug gauges give you access, among other things, to memory at a glance.

So there’s a gauge that you can move to, click on, and you’ll get an additional report that shows you more detail about your application.

We’ve tried to break this down to one number that’s most likely to be meaningful to you.

In this case you can think of it as the footprint of your application.

Because it's always present in the debugger, that one number gives you a good indication of just what you're doing as you're running your app.

If you're starting to consume more resources over time, or if something catastrophic does occur, you can see just what point that occurred at.

So you’ve got a good indicator when you might need to go and use Instruments.

And, indeed, there's a handy button right there so that if you see a trend you don't like, you can go directly to Instruments from the debug gauge.

You need to understand that this memory number is just one number, and there are all kinds of ways to measure memory as we’ll discuss again in a bit.

But Instruments is typically traditionally focused on the Heap.

And you may have seen situations where your application was summarily killed despite the fact that your Heap was relatively small.

It's important to know that there's really a lot of other memory involved in your application that you may not have thought about, may not have had to worry about, or may not have had the tools to understand before. It can take the form of code, images, or media.

It could be a lot of different things that you haven’t been able to see before.

So the big question is how do you get to all of this information?

Lastly, again, there's this point that measurement will give you different numbers depending on the tool you use, the strategy that tool uses to find the information, and really what you're looking for.

So if you’re looking for one number that sums up everything about your application, in some sense you’re trying to oversimplify what’s really a fairly complex set of subsystems operating.

So you will see that the gauge provides you one set of numbers.

Again, it’s what we think is the most meaningful number.

But there are other numbers that we can explore, and as we go through Instruments you’ll see some of those as well.

So it sounds like the right time to dive on in and have a look at this workflow.

I’m just going to bring up a project here.

And this project is one that you’re probably familiar with if not in source code because we don’t make it available, then because you’ve used it every day at this conference.

It’s the WWDC app that you have on your devices.

Or at least it’s an early version of it with a few issues, maybe some of them having been introduced by us for purposes that will become clear on the stage.

So let’s go ahead and run that application.

So it’s been deployed to my device.

It’s up and running.

And we’ll see that the gauges are providing me feedback here about how much CPU time I’m using, how much memory I’m using.

And so as I exercise the app I’m going to drill into a particular session.

I’m going to go ahead and look at maps, look at news, videos.

You can’t see any of this but, again, the workflow should be familiar to you.

And so I’ve come back to the initial state I was in, and I’ve seen rather a lot of growth over that time.

So maybe I want to have a look at that memory curve.

And it doesn’t look spectacular.

It’s kind of going the wrong direction here.

There’s enough information here to know that there’s something I should investigate, but not enough information to tell me what exactly I should do.

That’s the time that we might go to profile in Instruments.

And so by simply clicking here I get a prompt, and I have two options at this point.

I can choose to stop my debug session and hand this existing running application off to Instruments to investigate.

I will have missed the opportunity then to measure a lot of important information about the allocations that have already occurred.

So when you’re looking at memory that’s rarely the right option.

It’s possible there may be things that you could learn using that technique, but for our purposes no.

We’re going to ask to stop the debug session, launch a new copy of the application inside Instruments.

And we may see here what you may very well have run into already if you’ve been experimenting with the seed, that in the initial seed the handoff is not always graceful.

If you run into problems you can stop the application and simply start re-recording in Instruments.

Obviously this will be corrected.

So we’ve got our application up and running under Instruments.

We’re getting recordings here of what’s going on in terms of memory allocation.

And we’re seeing a nice allocations curve.

I mentioned before that typically Instruments has shown Heap information in Allocations.

Well, if we look at some of these numbers we would expect then maybe to see numbers that are smaller than the numbers that are being reported by the gauge.

That’s no longer the case.

My total live bytes being reported here, 57 meg, is actually even more than the number reported in gauges.

And that’s because allocations has been enhanced to show a lot more information than we’ve been able to see in the past.

In particular it’s not just Heap allocations but all virtual memory regions that have been allocated, mapped into memory while your application was running.

So if you look here specifically we actually have a filter that allows us to see a blended view of both Heap allocations and VM allocations.

Or we can focus on just the heap which is really just 3.6 meg in this case.

Or just the VM regions which accounts for rather a lot of memory in my application, memory that you haven’t been able to see in the past.

Well, that’s interesting, what kinds of things have we got here?

Well, if we sort by live bytes we'll see that we have mapped files, and I can dive in and see specifically which files have been mapped into my application, when they were mapped in, and get additional information.

In fact, I can find a particular file of interest and find the call stack that was responsible.

What was it in my code that caused this to be mapped in?


But in this particular case what caught my eye wasn’t the mapped files.

It was all of this image IO.

So I’ve got image IO going on, and as I exercise the application in the same fashion that I exercised it a moment ago, we’ll see that, in fact, image IO turns out to be a pretty substantial chunk of what my application is doing.

In fact, it’s dwarfing the Heap memory at this point.

So optimizing for my Heap is maybe not the most important thing for me to consider.

Maybe I need to understand this a little better.

And a lot of the tools that we’ve used before help me understand this.

So if I were to go into my call trees view, the call trees view will show me, again, down which paths the allocations are occurring.

If I search for image IO work here I can see that there’s core animation-related things happening that are resulting in memory mapped operations.

And so I can actually drill down and see the call path that led to this.

It’s interesting because I will see there are actually also malloc operations operating on the Heap in the same vicinity.

In fact, if I look at this line, and we'll go up even one more level here, 7.83 meg is being allocated down this path somewhere in code.

7.82 meg of that looks like it’s mapped memory, memory that would show up as VM regions that I couldn’t see before.

So I’d have this tiny amount of memory on the Heap that’s allocated down this same path and represents the same work that’s allocating all of these images.

That's what we'd like to be able to key in on and understand a lot better in order to control the behavior of our application.

So these are some interesting new tools that we have.

Of course we have some familiar tools as well.

I’m going to dig into the library and bring out one other familiar favorite, VM Tracker, and record a little bit about the current state of my application from VM Tracker.

The reason I bring this up we’ll dive into in a moment, but virtual memory is an interesting slippery beast.

A region can represent a lot of memory space but have only portions of that memory actually in active use in physical RAM on your device.

And the VM Tracker is one way that you can dig in and understand this.

So the VM Tracker in this case is showing me information like, okay, there’s this image IO that’s going on.

Of that image IO how much of it is virtual, how much of it is dirty and how much of it is resident?

These are incredibly important cues that we really need to understand a little better in order to see the full picture of what's going on in my application's memory.

I’d also like to point out that if you’ve used the Instruments from sort of a casual perspective, you go in and you look at the default presentations, you haven’t really tried tuning the way you look at things, you might want to try clicking on the left side of the navigation bar here.

Because this isn't just about navigation in the breadcrumb trail, it's about fundamentally different views for the instruments that we have.

And so for the VM Tracker one of those handy views is the regions map that allows me to see where things are laid out in memory.

So I could actually understand my virtual memory address range, which things are adjacent to which other things in case I'm overrunning the end of some buffer, et cetera.

There’s a lot of good information in here.

We won’t cover it all in this session sadly, but we are going to focus on some of the things that are particularly new this time around.

[ Pause ]

So let’s switch back to slides.

We talked about the fact that virtual memory is now more fully represented in allocations.

And we have a lot of familiar tools and techniques that we can apply.

There’s one that may not be quite so obvious, though, this notion that there’s an efficient alternative to running allocations on your application as is.

If you’ve ever tried using Allocations on a particularly large app you’ll note that it’s tracking a lot of information, millions upon millions of allocations and deallocations, and bookkeeping for that winds up actually taking a lot of memory and a lot of time.

So if you’ve got an application that you’ve never been able to use Allocations on before there’s a new interesting technique.

If you open the configuration panel for Allocations and move to only VM Allocation tracking, you will lose your ability to see the fine grained information on the Heap, but you can still see the big picture in terms of VM, and that could be a powerful tool for applications that previously there was no way to get any insight into.

Because this does give us insight into some very interesting things.

It gives us an idea of who mapped files into memory, who is responsible for contributing to my footprint in a way that the Heap never could.

And, again, for page level statistics, so beyond the VM region to an individual page, a concept we’ll talk about in a moment, there’s the VM Tracker which can give me more depth.

So, virtual memory.

We’ve talked a lot about virtual memory.

I pointed out concepts like dirty memory, resident memory, these are things that may or may not be familiar so let’s go through them briefly.

The notion of virtual memory versus resident memory.

My virtual memory is a logical address space from address 0 to the top of logical memory, and every single process has its own virtual address space.

Process A may be thinking it has memory at address 0 and process B thinks the same thing.

And obviously they have distinct chunks of memory.

So this is a nice logical concept from an application developer’s point of view.

You can pretend that you own all of RAM, but the reality is a little bit more complicated than that.

All memory regions that are reserved are allocated on 4K page boundaries and occupy a fixed number of pages so you’ll never share a page with another virtual memory region.

So the example we have up here you’ll see that we have a region that consists of 2 pages, 8K, another region that consists of 3 pages, 12K, and that’s representing my virtual address space here.

But nothing’s actually in RAM yet.

This is all just from my application’s point of view a simplistic way of looking at the world.

What happens is that on the first use, whether I read from a chunk of memory or write to a piece of memory it winds up being mapped into physical memory.

The system finds a page that's not used in physical memory (it's unlikely to be the first page; there's probably an operating system up and running at this point) and maps it to a page of your application.

Same thing happens again.

I find some other page that I touch, and these can be completely disjoint, and these result in physical memory coming in.

What happens is we describe these as having become resident.

So it’s still virtual memory, we still talk about it as virtual memory, but only a portion of it is actually resident in physical RAM at any given time.

Part of the reason that I like to draw the distinction between these two is because while physical memory is more likely to be your constraint, we’ve tried to illustrate this with the shorter bar, it’s not always the case.

It’s possible in an application that you will try to map in too many media resources in case you ever need them.

And so despite the fact that you’ve never read a single one of them, you’ve exhausted the amount of virtual memory that the operating system is going to allow you.

In particular on iOS you may find that you don’t have much loaded, but if you exhaust the virtual memory space that we allow for you we will simply kill your application outright.

So it’s possible to run into that limit.

It is much more likely instead that I’ll run into limits in terms of physical RAM.

The notion of clean versus dirty comes up next.

What’s dirty memory represent?

Well, clean pages are pages that we can discard at will and recreate.

What kinds of pages can we do that with?

Well, if we’ve loaded some code from Flash or from a physical disk drive we’ve loaded that in, we know where it came from.

If we need that memory back we can throw it away.

We can always reload it from disk.

That memory is still considered clean because it can be discarded.

And this is true of OS X and iOS.

On the other hand as soon as I touch a page and actually modify its contents so that there’s no longer essentially a backup copy that we can readily get we consider that memory dirty.

And so basically everything on your Heap, your Stacks, global variables, all of this is dirty RAM.

It's information that we have to keep in memory in order for you to be able to continue executing.

That’s absolutely critical for your sanity.

So what happens when we start running low on memory is we’ll start discarding pages that are clean.

On OS X then we will also worry about what we can do to swap things out.

We’ll take dirty pages and write them to disk so we do have a backup copy and we can reuse that memory.

At this point your system will start to slow as it needs to page memory in and out.

But on iOS the situation is much more drastic.

If there’s dirty memory and we haven’t got anywhere to put it because we don’t write it to Flash, too expensive for a variety of reasons, we will simply terminate your application.

So you absolutely need to keep your dirty memory under control especially on iOS.
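The clean-versus-dirty distinction can be made concrete with a small copy-on-write sketch, assuming POSIX mmap semantics (the scratch file name is hypothetical): a read-only file mapping is clean and can always be reloaded from disk, but the first write to a privately mapped page dirties a private copy while the on-disk original stays untouched.

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int dirty_page_demo(void) {
    const char *path = "clean_demo.tmp";   /* hypothetical scratch file */
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fputs("clean", f);
    fclose(f);

    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    /* A file-backed mapping starts clean: the kernel can discard it and
       reload from disk at will. MAP_PRIVATE lets us write to the mapping
       without touching the file itself. */
    char *p = mmap(NULL, 5, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    p[0] = 'D';                  /* first write: this page is now dirty */

    char buf[6] = {0};
    pread(fd, buf, 5, 0);        /* the on-disk copy is still "clean" */
    int ok = (strcmp(buf, "clean") == 0 && p[0] == 'D');

    munmap(p, 5);
    close(fd);
    remove(path);
    return ok ? 0 : -2;
}
```

That dirtied page is what has no backup copy: on OS X it would have to be swapped out, and on iOS it counts against the limit that gets your application terminated.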

So this process of swapping gives us potential but, again, for the vast majority of you it’s not available.

The last concept is the notion of private memory versus shared memory.

When I create a memory region I can give it a name, and by giving it a name another process can actually use the same physical memory and map it into its virtual address space.

Well, as it turns out in a lot of cases you will wind up instead with memory that is automatically shared, implicitly given a name because it’s mapped to a file.

And so as soon as I map a file if somebody maps the same file we’re now sharing that memory.

So consider an example where I write an application.

The first thing that happens is a piece of code gets paged in, okay?

We read the code off disk, and it's now mapped into my address space.

I then go and allocate something on the Heap, and so I need a page that backs some of the Heap.

And now we’ve got one process up and running.

But now I launch another instance of the same process.

It’s got its own address space.

So it has a completely separate address space; the regions happen to align here, but they may not.

And now it needs to load code.

Well, because the code is coming from a file it’s actually able to map the same chunk of physical memory into its virtual address space.

But then it goes to put something on the Heap, and because this is logically a separate instance its data is kept independently, and this is separate.

This is what we call private memory.

So often you will see a focus on your private memory, memory that is not potentially shared with anyone else because all the frameworks are shared.

It’s generally not considered reasonable to account against your process what you’re using from the frameworks.

And instead we want to focus on the private dirty memory, and that’s largely what we show you with the Xcode gauges.
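The sharing described above (two address spaces backed by one set of physical pages) can be approximated inside a single process with two `MAP_SHARED` mappings of the same file. This is a sketch assuming POSIX semantics; the scratch file name and function name are hypothetical.

```c
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int shared_mapping_demo(void) {
    const char *path = "shared_demo.tmp";   /* hypothetical scratch file */
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fputs("AAAA", f);
    fclose(f);

    int fd = open(path, O_RDWR);
    if (fd < 0) return -1;

    /* Two mappings of the same file are backed by the same physical pages,
       just as two processes mapping one framework binary are. */
    char *a = mmap(NULL, 4, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *b = mmap(NULL, 4, PROT_READ, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) { close(fd); return -1; }

    a[0] = 'Z';                    /* a write through one mapping... */
    int shared = (b[0] == 'Z');    /* ...is visible through the other:
                                      there is only one physical page */

    munmap(a, 4);
    munmap(b, 4);
    close(fd);
    remove(path);
    return shared ? 0 : -2;
}
```

Heap allocations, by contrast, are anonymous and private: each process gets its own pages, which is why tools focus on private dirty memory.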

So we’re going to talk about Heap memory.

What is the Heap?

Probably a familiar concept for most of you, but if it’s not, any time you call malloc you are allocating a chunk of memory, and it’s potentially quite small.

This is often used for fine grained things, a dozen bytes or more.

But it can also be used for fairly large things, and it’s in this managed area.

You may not have called malloc, but if you’ve called alloc or new on an Objective-C object under the covers that’s what it’s doing.

Or if you use the new operator on a C++ object that’s what it’s been doing.

And so you wind up with these malloc zones.

The malloc memory is actually backed by virtual memory, so there’s a virtual memory zone that the malloc subsystem has created for you, and it’s doing all the bookkeeping to try to reuse space and use space efficiently within that malloc region.

So if you actually look you will see that there is this VM malloc region lying around that represents the backing store for all of this.
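What alloc or new boils down to underneath can be sketched in plain C: a small request served out of a VM-backed zone that the allocator manages, rather than a fresh page from the kernel. The `Point` type here is just a hypothetical stand-in for a tiny object.

```c
#include <stdlib.h>

typedef struct { double x, y; } Point;   /* hypothetical tiny "object" */

Point *point_new(void) {
    Point *p = malloc(sizeof *p);   /* 16 bytes carved out of a malloc zone,
                                       not a whole page from the kernel */
    if (p) { p->x = 0.0; p->y = 0.0; }
    return p;
}

void point_free(Point *p) {
    free(p);                        /* the zone can reuse this slot for
                                       later small requests */
}
```

The malloc subsystem does the bookkeeping so that thousands of these tiny allocations pack efficiently into the backing VM region.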

And typically things that you keep on the heap you will often have reference counted so that you can do fairly sophisticated memory management.

It takes care of a lot of interesting bookkeeping for you.

You may also wind up seeing it, of course, allocated implicitly by other code on your behalf.

So when I’m looking at the heap things that I might want to keep in mind are that there are some types that are more expensive than others.

VM is about bytes.

I can look at a VM region and say this is a couple of megabytes, that’s actually relatively large, but in the Heap typically, again, things are going to be 16 bytes, 30 bytes, it’s going to be some small amount of memory that’s reserved.

And so it’s the aggregation of a large number of these things that gets interesting.

Why is that important?

It’s important because in practice a small object can have a large graph of other objects that it references.

So let’s say we have, for example here, in our world a view of some kind.

It actually contains that small chunk of memory that’s actually allocated for us here, 96 bytes in this case.

It has a layer pointer, has view delegate pointers, has pointers to all kinds of other objects.

And these objects themselves occupy space.

The layer object in this case another 32 bytes.

And this may not seem like it’s going to add up in a hurry except that some of those objects manage whole VM regions behind the scenes.

So a CA layer needs to keep its bitmap data somewhere, and as it turns out that data isn’t always on the Heap.

In this case we’ve got a VM region that could be megabytes and megabytes of pixel data.

And so it behooves you to understand that, yes, what I’m looking for are these objects that keep large graphs of other objects behind the scenes.

There are the obvious ones.

If I look for an NSSet or an NSDictionary I’ll see that there’s a relatively small amount of memory attributed to it by allocations.

But really it’s holding onto we know potentially a large collection of objects.

And then there’s the less obvious containers, again, UIView, view controllers or NSImage that represent a lot of information.

So as I showed you earlier in the call graph where I found one place that creates both an object and a VM region this is what happens typically.

And it's when you finally release your last reference to the object that the VM region is deallocated on your behalf.
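The small-object-with-a-big-backing-store pattern can be sketched in C. The `Layer` struct here is a hypothetical stand-in for something CALayer-like: the heap object itself is tiny, but it owns a region of pixel data that dwarfs it, and releasing the object releases the region.

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    int width, height;
    unsigned char *pixels;   /* backing store: can dwarf the struct itself */
} Layer;                     /* hypothetical CALayer-like object */

Layer *layer_create(int w, int h) {
    Layer *l = malloc(sizeof *l);            /* tiny heap object */
    if (!l) return NULL;
    l->width = w;
    l->height = h;
    l->pixels = malloc((size_t)w * h * 4);   /* 4 bytes per pixel: potentially
                                                megabytes of backing data */
    return l;
}

void layer_free(Layer *l) {
    free(l->pixels);     /* freeing the small object releases the big region */
    free(l);
}

size_t layer_footprint(const Layer *l) {
    /* What a tool would attribute: a few bytes of heap, plus the region. */
    return sizeof *l + (size_t)l->width * l->height * 4;
}
```

A 1024 by 1024 layer here costs over 4 megabytes, even though Allocations would attribute only `sizeof(Layer)` bytes to the object itself.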

So what can I do to investigate?

Well, as it turns out there are a couple of ways to think about this.

There are my classes.

My classes are a subset.

They’re probably things that hold onto large graphs.

And so if you want to see just your classes you should probably prefix them consistently.

So in this app we have everything prefixed with WWDC.

And that means that I can go ahead and just bring up the allocations UI, go ahead and click in the filter in the upper right corner and type my prefix.

So once we've got that prefix typed in you'll see that we're filtering down to just the objects whose category uses that prefix.

And category is the way we divide all of the allocations into some reasonably observable group of objects.

And in this case it’s based on the class of the object.

So if I’ve used a consistent prefix it’s easy.

But we categorize objects by things other than just Objective-C classes.

Specifically we’re much, much better in this release at C++ objects.

If you have a virtual C++ class you’ll find that we categorize those by name as well.

We also find other things that have vtables.

We can find dispatch queues.

We can find XPC-related types.

Just try searching for dispatch or XPC.

We can also find blocks that have escaped the scope and been allocated on the Heap.

Look for NSMallocBlock to find those.

[ Pause ]

So when we look at Heap growth and it’s going the wrong direction and we could sit down and look at objects all day long, there are a few kinds of problems to keep in mind.

One of them is the potential for leaked memory.

It’s possible that you have allocated an object, you’ve finished using it, you in fact no longer have any references to it.

And this is much easier to do without ARC than it is with ARC.

It’s still possible.

If you literally cannot get to it anymore, because there is no longer a pointer to it from any object you can still reach, we consider this memory leaked.
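The classic non-ARC leak is losing your only pointer to a block before releasing it. Here is a sketch in plain C, with a toy allocation counter standing in for what a leak detector tracks (all names are made up for illustration):

```c
#include <stdlib.h>

/* Toy counter: blocks allocated but not yet freed. */
static int live_blocks;

static void *track_alloc(size_t n) { live_blocks++; return malloc(n); }
static void  track_free(void *p)   { live_blocks--; free(p); }

/* The leak: the only pointer to the first block is overwritten, so that
   block is unreachable but never freed. */
int leaky_copy(void) {
    live_blocks = 0;
    char *buf = track_alloc(64);
    buf = track_alloc(64);   /* LEAK: no pointer to the first block remains */
    track_free(buf);         /* frees only the second block */
    return live_blocks;      /* 1 block left outstanding */
}

/* The fix: release the old block before losing your reference to it. */
int fixed_copy(void) {
    live_blocks = 0;
    char *buf = track_alloc(64);
    track_free(buf);         /* drop ownership first... */
    buf = track_alloc(64);   /* ...then reassign */
    track_free(buf);
    return live_blocks;      /* nothing outstanding */
}
```

Under manual retain/release the same mistake is assigning over an ivar without releasing the old value first, which is exactly the kind of error ARC eliminates.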

And Daniel will show you a little bit later about how exactly you can go about tracking down this leaked memory.

But there’s some other categories that are much trickier to understand.

Abandoned memory, memory I’ve allocated, I have a legitimate path to get to it, but as it turns out there’s never going to be a single line of code that actually follows that path again.

I stuck it in the global variable during startup, and that startup code is the only code that ever references it.

That memory has been abandoned.

No leak detection tool on the planet will find it for you, but it’s still important that you understand that that memory exists and is cluttering up your user’s environment.
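Abandoned memory can be sketched like this; the global and function names are hypothetical. Because the global keeps the block reachable, no leak detector will ever flag it, yet no code reads it after startup.

```c
#include <stdlib.h>
#include <string.h>

static char *g_startup_scratch;   /* reachable forever, used exactly once */

void app_startup(void) {
    g_startup_scratch = malloc(1 << 20);    /* 1 MB used only during launch */
    memset(g_startup_scratch, 0, 1 << 20);  /* ...parse launch state into it... */
    /* Abandoned: still referenced by the global, never touched again. */
}

/* The fix: scope the buffer to startup and release it when done. */
void app_startup_fixed(void) {
    char *scratch = malloc(1 << 20);
    memset(scratch, 0, 1 << 20);
    free(scratch);                          /* nothing outlives startup */
}
```

Generational analysis, described below, is the technique that catches this: the abandoned megabyte shows up as growth that never goes away between snapshots.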

There’s also cached memory.

If you have a cache that you stick things in you may find that you open a document, put the document reference in the cache, close the document, and it sticks around forever.

It’s really efficient to go and reopen that document now, but it’s a shame if the user really intends to do a lot of other things and it’s cluttering up the world.

And in this case maybe the right thing to do is consider using NSCache to manage your caches so that it can handle memory for you.

But we’re going to talk about tracking some of this down using a technique called generational analysis.

You may have seen this before.

Previously we referred to it as a Heap shot.

But given that we cover more than Heap now we’ve renamed the facility.

It’s about following these steps to find our problems.

We reach a steady state.

We launch our application.

We then do something that allocates memory, open a document and then get back to that steady state, close the document and repeat that series of steps.

And along the way we may find something interesting.

Specifically, if I repeat this process over and over again and every time memory winds up accruing, then I've got some form of abandoned, over-cached, or leaked memory going on.

So visually steady state, some sort of intermediate state with the document open, back to the original state.

Now, you may find that there’s some warmup cost here.

We’re loading code that hasn’t been used before.

There’s some basic data structures being set up.

Some level of warmup cost is to be expected.

But it’s on the repeated use of this that we wind up finding waste.

[ Pause ]

If you find that you are going through this repeated cycle and you’re not seeing any waste, you’re getting a net 0, you may find that you have free memory from a heap perspective that isn’t actually free from a virtual memory perspective.

How does that happen?

Well, it’s something we refer to as Heap fragmentation.

Specifically I have a situation where I have a variety of pages, and every page has exactly one object that I keep referencing on it.

Those pages, therefore, are still kept resident in physical memory.

And now I'm using 32 bytes of legitimate data while occupying 4K of memory.

So what do I do about this?

How do I understand this situation, and how did I get there in the first place?

Well, here’s how it happens.

You go ahead and get a malloc VM region.

This is created for you by the malloc subsystem in order to allocate an object.

I then allocate a bunch of objects and I’ve filled this page.

Repeat. So we allocate a bunch more VM regions.

We fill them with a bunch more objects, and now we’re in a perfectly reasonable situation where we’re utilizing all that memory.

Now we go and we free most of those objects.

And unfortunately what happens is we’re left with objects in the locations where they were allocated, pinned there occupying memory.

So if you’ve got a process that allocates a lot of memory and then releases most of it you may be fragmenting memory.

You may not see the memory coming back that you would expect from looking at the Heap.
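The fragmentation pattern can be sketched in C: allocate a large batch of small objects, then free all but a scattered few. The names and stride are made up; the exact page layout depends on the allocator, but the arithmetic shows how little live data can pin a lot of pages.

```c
#include <stdlib.h>
#include <stddef.h>

enum { N = 4096, KEEP_EVERY = 256 };

/* Allocate many small blocks, then free all but one per stride: the
   survivors sit in the pages where they were allocated, pinning those
   pages even though the live byte count is tiny. */
size_t fragmentation_demo(void) {
    char *blocks[N];
    for (int i = 0; i < N; i++)
        blocks[i] = malloc(32);

    size_t live = 0;
    for (int i = 0; i < N; i++) {
        if (i % KEEP_EVERY == 0) {   /* survivor: pins its page */
            live += 32;
            continue;
        }
        free(blocks[i]);
        blocks[i] = NULL;
    }

    /* live is now 16 survivors * 32 bytes = 512 bytes, but those survivors
       are scattered across pages that malloc cannot return to the kernel. */

    for (int i = 0; i < N; i += KEEP_EVERY)
        free(blocks[i]);             /* clean up the survivors */
    return live;
}
```

This is why a peaks-and-valleys allocation graph is a warning sign: the valley's heap count drops, but the resident footprint may not follow it down.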

The biggest thing you can do about this is avoid a situation where you have these extreme peaks and valleys.

So go into Allocations.

This is the place where the Heap Allocation tool, if you’re just filtering for Heap only and graphing the Heap, can show you precisely how much Heap you’re using.

So we’re seeing this graph with peaks and valleys.

And that is indicative of a potential problem.

You may never get back the virtual memory region that is resident from that high water mark.

Most of the time you’ll get back a significant amount.

But be careful.

Your ideal graph looks a lot more like this, relatively small increases and decreases over time to the extent that you can manage it.

Of course your friend in this is autorelease.

Keep in mind that autorelease is sort of the best tool you can possibly use here.

And what you need to do is make sure that you give the autorelease pool a chance to drain.

You either drain it yourself in the course of a long loop that's doing a lot of operations, or if you're doing something on a dispatch queue you may find it's valuable to drain it there.

You need to be careful, as always, that autoreleased objects aren't something that you're going to reference after the autorelease pool has been drained.
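The drain-inside-the-loop advice can be sketched with a toy deferred-release pool in plain C. This is a model of the concept only, not how autorelease pools are actually implemented, and all names are made up.

```c
#include <stdlib.h>

/* A toy "autorelease pool": frees are deferred until the pool drains. */
#define POOL_MAX 256
static void *pool[POOL_MAX];
static int pool_count;

void *pool_autorelease(void *p) {   /* schedule a deferred free */
    if (pool_count < POOL_MAX)
        pool[pool_count++] = p;
    return p;
}

int pool_drain(void) {              /* issue every deferred free */
    int drained = pool_count;
    for (int i = 0; i < pool_count; i++)
        free(pool[i]);
    pool_count = 0;
    return drained;
}

/* Draining every `batch` iterations keeps the peak low: at most `batch`
   temporaries are ever awaiting release at once, instead of `total`. */
int process_many(int total, int batch) {
    int peak = 0;
    for (int i = 0; i < total; i++) {
        pool_autorelease(malloc(32));    /* temporary per-iteration object */
        if (pool_count > peak)
            peak = pool_count;
        if ((i + 1) % batch == 0)
            pool_drain();                /* don't let temporaries pile up */
    }
    pool_drain();
    return peak;
}
```

In Objective-C the equivalent is wrapping the loop body in `@autoreleasepool { ... }`, which bounds the peak the same way.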

And with that I'm going to turn it over to my engineer, Daniel Delwood, who is going to dive into more of the mysteries of Objective-C, retain/release and other goodness.

Thank you, Daniel.

[ Applause ]

Thank you, Kate.

So Kate went over virtual memory from a high level and then zoomed into Heap memory.

And now we’re going to zoom in even further and talk about Objective-C, your objects and the management schemes that you can employ.

So first of all let’s start with a brief review of retain release.

Objective-C uses a reference-counting ownership model.

And this means that when you create an object the object starts out with a reference count of 1.

So there are two main primitives: retain, which says I want to express ownership over this and bumps the retain count, and release, which relinquishes that ownership and decrements the retain count by 1.

Whenever the count drops to 0 the object is freed back to the malloc subsystem.

Now, the third primitive there is autorelease, and it’s important to remember that this is just a delayed release, and it’s very useful for things like returning objects from functions.

Just remember, though, that the autorelease will issue that delayed release whenever the current autorelease pool drains.
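
A minimal sketch of those three primitives under manual retain/release (compiled without ARC; the method and ivar names here are illustrative):

```objc
// Manual retain/release (code compiled with -fno-objc-arc).
- (void)demo {
    NSMutableArray *list = [[NSMutableArray alloc] init]; // starts at retain count 1
    [list retain];                                        // count 2: a second ownership claim
    [list release];                                       // count 1
    [list release];                                       // count 0: freed back to malloc
}

// Returning an object: autorelease is a delayed release, so the caller
// receives a live object that is released only when the current
// autorelease pool drains.
- (NSString *)formattedName {
    // _first and _last are hypothetical ivars for this sketch.
    NSString *name = [[NSString alloc] initWithFormat:@"%@ %@", _first, _last];
    return [name autorelease];
}
```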

The rules are established, they’re easy to learn, and here’s actually a good reference document, the Advanced Memory Management Programming Guide.

I’d recommend bookmarking it.

It’s a great place to go back to when you’re debugging issues.

The thing to remember here, though about Objective-C’s ownership model is that it’s deterministic and simple to learn.

Most importantly, though, it’s fast.

So what can go wrong with this, right?

We’re talking about fixing memory issues.

Well, it’s tedious.

Writing retains and releases by hand is very error prone, and more code really is more bugs.

And so if you write too many retains you’re going to get objects that may not be referenced by anything else in your code but still have nonzero retain counts.

Now, if you release something too many times it will quickly lead to crashes.

Now, the third problem that’s common with this is retain cycles.

And this is more of a structural problem, more of a behavioral problem.

And it’s because you could have objects that reference each other, retain each other, and if they’re not referenced by anything else in your application they’re still going to leak, and you need to find some way of breaking that cycle.

Well, luckily for the tedium it’s gone now.

We have automatic reference counting.

And the compiler is really great about doing those retains and releases for us.

So it’s compiler-assisted convention enforcement.

It knows that whole document I just told you about.

It knows how to do the retain and release perfectly.

But it’s still important that you know how that convention works under the covers.

Because whether you’re running ARC or not, when you’re debugging issues you need to be able to understand all of the data, understand what retain and release means in your object history.

Now, automatic reference counting doesn’t solve retain cycles on its own because that’s not a mechanics issue, that’s a design and behavior issue.

Luckily, though, it does provide some very useful tools and keywords to help fix these including things like zeroing weak references where you can mark one of the references weak and the whole cycle will go away.
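
A minimal sketch of breaking such a cycle (the class names here are made up): a parent strongly owns a child, and the child’s back-pointer is marked weak, so it zeroes automatically when the parent goes away:

```objc
@interface Child : NSObject
// weak: does not retain its owner, and is set to nil automatically
// when the parent is deallocated -- no cycle, no leak.
@property (nonatomic, weak) id parent;
@end

@interface Parent : NSObject
// strong: the parent owns the child.
@property (nonatomic, strong) Child *child;
@end
```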

So let’s talk about then what the common problems are under ARC and focus on those more behavioral structural issues.

Well, the first one is memory growth.

And memory growth can really come in two different forms.

The first like I talked about is retain cycles.

And the second is abandoned objects.

So for retain cycles the leaks instrument actually provides a very nice graphical display of the cycle so you can easily understand what the cause is and go in and find out where you need to use the weak keyword and fix it.

As for abandoned objects, what Kate was talking about with the generational analysis is definitely the best way to go.

The second thing that can happen under ARC is messaging deallocated objects.

Now, you might say how is this happening if I’m not issuing too many releases?

Well, I’ll get to that in a little bit.

But the problem with messaging these objects is that it’s very undefined, nondeterministic behavior.

You might say, well, it always crashes for me, but it really depends on what happens to that object once you free it.

If you return the memory to the malloc subsystem it doesn’t have to clear it, and it doesn’t have to reuse it.

And it may still appear to be an object, it may still respond to messages and you won’t get a crash some of the time.

When you do get crashes, though, things like objc_msgSend, objc_storeStrong, or doesNotRecognizeSelector: in the backtrace are good indicators that this is the sort of problem that you have.

So how do we fix that?

What tools do we have?

Well, the Zombies template is a great way to do this, and it uses a very age-old debugging environment variable exposed by Foundation.

And that’s NSZombieEnabled=1.

And what this does is that instead of objects being deallocated when they’re freed, when their retain count goes to 0, they’re changed into zombie objects.

So that whenever you reference them later and use them and send them a message they crash.

Very deterministic behavior, and this is what we want when we’re debugging issues.

So, again, it’s deterministic because the memory isn’t changed or reused.

And I want to also point out that every Zombie object that you leave in your process is technically also a leak.

So you really don’t want to use this sort of debugging at the same time that you’re looking for leaks in your code.

An important note, though: previously this was available for iOS only on the iOS Simulator.

It’s now available for iOS 7 devices.

[ Applause ]

So with that I want to show you a quick demo.

So I have here the same WWDC app, and I was noticing an intermittent crash when I was dealing with bad network conditions.

Well, I can actually reproduce this a lot easier just by messing up a URL, and you’ll see what it turns out to do.

So I’m going to go ahead and run this on my phone.

[ Pause ]

So the app comes up.

You’ll have to trust me it’s looking very nice on iOS 7.

And immediately we see this sort of thing.

EXC_BAD_ACCESS, crash, and if we zoom in here right in the debugger: objc_release.

So it looks like we’ve got probably a memory management issue.

So I’m just going to use the profile action from the run menu, and it goes ahead, builds, and pops open Instruments, and I can go ahead and select the Zombies template for my device.

So, again, it launches, and we get this graph of allocations as we’d expect.

Alright, so Instruments has detected that I have a message to a zombie, and I can get some more information just by hitting the arrow.

So the first thing you’ll notice here is that my retain release history looks a lot different than it did in Xcode 4.

And that’s because what Instruments is doing is showing me a pairing view of retains and releases.

And if I turn this down you’ll see that we have malloc events, autorelease, retain, release, all grouped together into one chunk.

Now the scope bar here allows me to change my view so I can see it by time which is what I’m used to seeing before.

Or, I can see it again grouped by that pairing, or by back trace.

So the interesting part here is that I can select each one of these events and take a look at the stack trace on the right.

And look through and see, okay, what’s going on here.

But Instruments has already told me that these are very likely already paired.

They’re already matching up.

There’s probably not too many retains, too many releases.

So the question is why is this a message to a deallocated object in objc_storeStrong?

So we can just jump right to the code.

And I’m going to open it in Xcode so you can see it better.

And what happens, let’s see here [ Pause ]

What we’ve got [ Pause ]

is an error parameter being returned by reference from this processRawData function.

So it seems like a very standard function for parsing a JSON response.

We’ve got an autorelease pool around it because it’s doing a lot of string manipulation and we don’t want to cause memory spikes, and then we’re passing our error back.

So the question is why is this crashing when we get back to our caller of processRawData right here.

We get the error back, and then we’re trying to store it into a response-processing error below.

Well, just by taking a look at the retain and release history we’ll see that there was an autorelease going on so that we can pass objects back to our callers.

But then those autorelease pools actually popped somewhere in between.

And looking at that function that we saw, there was an autorelease pool written there.

So I’ll talk about this a little bit later, but what we really needed to do was use a local NSError and get that assignment done outside of the autorelease pool.

So, the other thing I want to show you here is using the leaks instrument.

I’ll just go ahead and fix that typo.

And so I could just go ahead and profile again in instruments.

[ Pause ]

I actually want to use the leaks template.

[ Pause ]

And Instruments starts up with the template, with both Allocations and Leaks, and starts recording.

So quickly you’ll notice that we have some leaks detected in the background.

And I can just go ahead and use the app a little bit and sort of get some activity going on.

And as I scroll around it actually is interesting to type in that WWDC prefix and take a look at the view here to see anything interesting going on in our Heap.

If we zoom in we’ll even notice that like our table cells, there’s a lot of transient cells.

It’s very likely we’ve done something wrong with cell reuse just by looking at these high-level statistics.

And another interesting thing here with the retain release view is if I select one of these WWDC news objects you’ll notice that Instruments was able to quickly pair up a whole ton of retains and releases, even pairing retains, autoreleases, and releases together.

So let’s take a look at the leaks, and I’ll remove my filter.

And I’m actually interested in a certain reachability leak.

You’ll notice that we have a bunch of different object types: NSDatas, NSArrays, network reachability objects.

And they’re all coming from the SystemConfiguration system library.

So are these really our fault or not?

Well, to see if they’re related I can go to the cycles and roots page and see that, yes, the SCNetworkReachability is a root leak, and that it actually leaks an NSArray which leaks a list.

So that’s kind of nice.

I can focus all my efforts in that one location.

So if I click on the network reachability object and pull in the extended detail view, I notice I allocate it right here at reachabilityWithHostName:.

And in Xcode this shows up as me creating an SCNetworkReachability with a host name, and then I create a wrapper for it and then I release it.

So it looks like I’m using it fine, right?

No problem at the allocation point.

Well, if I dive right in I again get that retain release view, and not many retains and releases.

Well, I’d like to pair these up and know which one is at fault.

So we have a nice pairing assistant in the bottom left that I can bring up.

And if I select different CFRetains and CFReleases it will suggest ones that could possibly be paired with it.

So in this case I have a CFRetain and a CFRelease, and these look like they’re properly paired.

And here’s my malloc that I just looked at and that CFRelease that I just saw as well, so I can pair these up.

And now I very quickly dropped everything out of this table except for the CFRetain that’s probably unmatched.

Double click, go to source, and here we have the initWithReachabilityRef: method.

And so we’re doing a CFRetain on the reachability ref.

And well, oops, I forgot to write into the alloc so that’s pretty easy to go fix.

So, alright, that is showing you the Zombies template on an iOS device and taking a look at a couple leaks.

[ Applause ]

So first steps in applying this to your app I’d very much recommend that you switch to ARC.

It gets rid of a lot of that code that the compiler can do for you and do very, very well, and also run the static analyzer.

You may have noticed there actually are some analyzer warnings, and had I looked at the analyzer I may have fixed some of those bugs without having to do any run time debugging.

And that’s always a lot better if you can save time.

Another thing if you have crashes, the Zombies template is a great first resort.

If it doesn’t catch anything, great, continue debugging.

But if it does then you just saved yourself a lot of time.

For leaks the back trace of that allocation doesn’t tell the whole story, so you can’t just stop at that top level view.

You really do need to dive down, look at the retain release history even when it’s an object that’s managed by ARC.

And, finally, you could save time now by pairing those retains and releases with the new UI.

So there’s two ways to do it as I’ve showed, the manual pairing assistant as well as the heuristic-based automatic pairing.

And you can see in this screenshot that most of them were automatically paired, and I’ve got that assistant up giving me suggestions.

Now, I should note that this actually works better if you’re using ARC and if you’re building with no optimization.

And so why do I mention that specifically?

Well, because by default the profile action that I was using from the run menu uses the Release configuration.

And release is really great if you’re doing things like performance work and trying to find out how your app is spending its time.

You really want it to behave as much as you can the same way as you’re going to ship it.

For memory tools and memory analysis that additional debugging info is actually very useful.

And so you can set it just by using the scheme editor: the profile action translates directly to the Profile action in the scheme, and you can just set it to Debug instead of Release.

So with that I want to talk about a couple of common Objective-C issues that you can face even under ARC because these are the issues that are behavioral, structural in your code.

And there’s a lot of really powerful keywords that the run time provides to solve some issues.

They’re both powerful, and some of them are a little bit dangerous.

And so I want to go through a couple of these so that when you see them in your code you hopefully won’t be surprised, and you’ll be able to more quickly get to a solution.

So the first thing is block captures.

Now, blocks are really, really powerful, and you can use Objective-C objects, dispatch objects, many things inside them.

And when they’re copied to the heap for things like dispatch_async or API registrations, they capture those referenced objects strongly by default.

So if you use self it will retain self, and when the block is destroyed it will release self.

But instance variable accesses also implicitly reference self.

So this means if you access an ivar like _foo, it’s going to reference self and use it in that way.

So let me show you an example here.

I’m using the NSNotificationCenter API.

And I’m storing the observer token in an instance variable.

Great. So self is retaining the observer token, and then I’m passing in a block, and that block is, in a roundabout way, retained by the observer token as well.

But self is retained by that block, and so you can really quickly see here that this leads to a retain cycle and it’s going to cause problems.

So the solution is just to use the weak keyword to break these cycles.

In this case we can just define a weak reference to self and use it inside the block.

Now, just as a note, when you are running non-ARC you may have to use the __block keyword to also indicate don’t retain.

It means that the variable can be modified inside the block, but it has that additional effect under non-ARC.
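
Putting that together for the NSNotificationCenter case, a sketch under ARC might look like this (the ivar, notification name, and handler method are hypothetical):

```objc
__weak typeof(self) weakSelf = self;
_observerToken = [[NSNotificationCenter defaultCenter]
    addObserverForName:@"SomeNotification"   // hypothetical notification name
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                // The block captures weakSelf, not self, so the
                // token -> block -> self -> token cycle is broken.
                [weakSelf handleNotification:note]; // hypothetical handler
            }];
```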

So when do you need to look for these retain cycles in blocks?

So is it all the time or is it just sometimes?

Well, it’s mostly just persistent relationships, so things like I showed you, NS notifications or error call backs or things that recur over time.

So things like timers, dispatch source handlers, and that’s where you should definitely look to using the weak keyword.

For things like one time executions, enumeration of an array, of a dictionary you don’t need to use the weak keyword.

So let’s talk about weak just a little bit.

What does weak do and what does that mean to you?

Well, weak validates the reference whenever it’s used.

So it checks that the object it refers to is still alive and hasn’t entered its dealloc on another thread.

So if the object isn’t alive you get back nil whenever you use a weak variable.

This means you should avoid consecutive uses, right?

If you call beginTransaction on a weak variable and then endTransaction, well, maybe only the beginTransaction went through, because your weak reference may return nil on the second use.

Finally, never do an arrow dereference with weak.

And just to show you an example of this: I can even check if the weak object is non-nil and then access the weak object’s delegate ivar with an arrow dereference.

If you get back nil on that second use, then you’ve dereferenced nil and crashed.

So better would be to use the dot syntax in this case to access a delegate, because sending a message to nil is just fine.

Or, you can be very explicit, and this is actually probably the best way to go.

Just create a strong object, promote your weak reference and then use the strong object in any way you see fit.
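
As a sketch, the explicit promotion pattern looks something like this (MyObject, the delegate, and the transaction methods are placeholders, not real API):

```objc
// Promote the weak reference once: either we get a strong reference
// that keeps the object alive for the whole scope, or we get nil.
MyObject *strongObject = weakObject;
if (strongObject) {
    [strongObject.delegate beginTransaction]; // safe: object can't die mid-scope
    // ... more work ...
    [strongObject.delegate endTransaction];   // guaranteed to see the same object
}
```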

So one final note about weak.

Since it has to validate the reference, it’s not a free thing.

So if this is in a really, really hot loop you may want to think about promoting it to strong and using that or not using weak in that specific instance.

Which brings us to __unsafe_unretained.

This is, as the name suggests, an unsafe keyword.

But there’s some risk and there’s some reward.

What it says is ARC please don’t manage this variable at all.

There’s no retain when something is stored to the variable, no release when nil is stored to the variable.

Important to note, though, is that if you’ve got legacy code that you’ve migrated to ARC by hand, a property declared with assign actually means __unsafe_unretained under the covers.

And so you can very easily get crashes because these are dangling references to whatever the object is.

Also keep in mind that many of the frameworks will have these sort of unsafe references to your code such as delegates or data sources.

And if you’re getting a crash where the framework is trying to send a data-source message to your object and your object is already deallocated, you may have just needed to go into your dealloc and nil out that data source before your object went away.
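
A sketch of that defensive pattern, assuming a view controller with a table view whose delegate and data source are unsafe (assign) references back to it (the _tableView ivar is hypothetical):

```objc
- (void)dealloc {
    // The framework holds unsafe references to this object; clear them
    // so it can't message us after we're gone.
    _tableView.delegate = nil;
    _tableView.dataSource = nil;
    // (non-ARC code would also call [super dealloc] here)
}
```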

So in general I would sort of recommend this as a last-resort keyword, for when you have performance issues or you need to use it for some other reason.

So one of the keywords to talk about here is __autoreleasing.

And this one actually came up with that demo example.

So for an __autoreleasing variable, the object is sent a retain and then an autorelease whenever an assignment happens to it.

So out parameters, like NSError out parameters, are actually __autoreleasing by default.

And this means that if there’s an autorelease pool wrapping the assignment you can get some interesting behavior, namely crashes.

So in this case here we have an autorelease pool wrapping our JSON use, just like that.

We take that out error and we do the assignment inside it.

So the assignment to that __autoreleasing variable happens inside our autorelease pool.

The instant we leave our autorelease pool, well, now that delayed release happens immediately, and what we’ve done is return a deallocated NSError to our caller, causing problems.

So the fix is simple.

If you write an autorelease pool, write a local.

So in this case we just need to write a local error, send the local error into the API, and then do the __autoreleasing assignment outside of the autorelease pool that we wrote.
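
A sketch of that fix, paraphrased from the demo (the processRawData name comes from the talk; the JSON call and signature details are assumptions):

```objc
- (id)processRawData:(NSData *)data error:(NSError * __autoreleasing *)outError {
    id result = nil;
    NSError *localError = nil;       // local, strong: survives the pool
    @autoreleasepool {
        // Heavy parsing work; its temporaries die when this pool drains.
        result = [NSJSONSerialization JSONObjectWithData:data
                                                 options:0
                                                   error:&localError];
    }
    if (outError) {
        *outError = localError;      // the retain/autorelease of the assignment
    }                                // now happens OUTSIDE the pool we wrote
    return result;
}
```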

The final keyword I want to talk about is bridge casts.

Now, ARC is great for managing life cycles at the Objective-C level, but in most of our code we still have to use C-based APIs like core foundation, core graphics, things that take void star context.

And for dealing with those, there are bridge casts.

There are three conversion primitives: __bridge, which is just a type cast; __bridge_transfer, which does a release, and if you like the syntax of CFBridgingRelease better you can use that; and __bridge_retained, which does a retain as well.

Although I should note here that incorrect bridging because it’s very explicit can lead to a crash or a leak.

So how do you use them correctly?

Well, this is the standard way of using these.

If you’re going from a CF +1 reference you use __bridge_transfer, and you really should think about your references in terms of where the ARC-managed object is.

But I specifically want to call out the third cast because this is the most dangerous.

If you’re casting from an ARC-managed reference to a void * or a CF reference, this effectively creates an unsafe, unretained CF reference.

So what do I mean?

Well, say I have this NSString in an ARC-managed reference, and I use the __bridge cast to put it in a CF reference.

Well, now anywhere in between here, if, say, the logInfo variable goes out of scope and I never use it again, the compiler is free to release that object.

And now when I go to use it in CFURL create, boom, right?

And now it’s kind of a very strange crash inside the framework.

So for that, just switch to the fourth one.

Use __bridge_retained, and just remember to CFRelease it later.
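
A sketch of the safe version of that example (CFURLCreateWithString is a real API; the variable names and URL are illustrative):

```objc
NSString *urlText = @"https://example.com";  // ARC-managed

// __bridge_retained: ARC performs a retain, and the CF side now holds
// +1 ownership -- the compiler can no longer release urlText out from
// under us while cfText is in use.
CFStringRef cfText = (__bridge_retained CFStringRef)urlText;

CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault, cfText, NULL);

// We took +1 with __bridge_retained, so we must balance it ourselves.
CFRelease(cfText);
if (url) CFRelease(url);
```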

So I want to briefly talk about being a good citizen on the platform and what sort of testing you can do for this.

So test some real world scenarios.

Definitely test on constrained devices.

If you’re on a Mac, that means low-memory systems, maybe slower CPUs.

If you’re on an iPhone this is just older hardware.

Test your first install and first launch.

If you’re having to build up a database you may have memory spikes in that case, and these are really good test cases.

As well as large data sets.

Everyone wants to have really devoted and awesome users who use your app for years, and it’s worth testing for them that you can handle these big things.

Also, test background launch on iOS 7.

And to do that you can just use the option in the launch options to simulate a background fetch and off you go.

And, finally, make sure that you test for leaks and abandoned memory.

So, system memory pressure: we’ve talked about this briefly, and I’m not going to touch on it very much.

But when pages have to be evicted it really is your dirty memory that matters.

And in this case on OS X these are compressed or saved to disk.

See this talk right here for more information, it’s really great.

And on iOS, memory warnings are issued and processes get terminated.

So when it comes to memory warnings and how you can avoid termination, well, don’t be the biggest.

Dirty memory really is what counts.

You can use the VM Tracker to find out how much you’re using.

And make sure you get a chance to respond.

You’re not guaranteed a memory warning, but if you get one, and hopefully you will, it will arrive on the main thread.

So make sure your main thread isn’t blocked for many reasons, also user responsiveness.

And, of course, avoid large and rapid allocations so the system doesn’t have to act quickly.

So these are the three best ways to respond using the notifications or the APIs that get called.

And you should consider freeing up some memory before you enter the background.

So there’s the [inaudible] for that.

So to summarize we really want to encourage you to be proactive.

Start with the gauges, use Instruments, monitor, test and investigate.

Definitely avoid memory spikes.

These can lead to fragmentation in your application’s memory, and avoidance is the best policy.

And definitely don’t allow unbounded memory growth; use generational analysis to catch it.

There are some great language tools for you to use.

And I hope that you make great effective use of them.

For more information contact our developer tools evangelist, Dave DeLong.

And the Apple developer forums are great.

Here’s some related sessions.

I just want to specifically call out the Building Efficient OS X apps so you can get a deeper understanding of virtual memory and how that works on the system.

So come see us.

We’d love to help you.

And thank you very much for coming today.

[ Applause ]
