What’s New in Instruments 

Session 304 WWDC 2010

Discover how the latest advancements in Instruments help you to pinpoint and eliminate performance problems in your iPhone OS and Mac OS X apps. At this highly-recommended session for all developers, you’ll gain a new understanding of your memory usage, learn to perform fine-grained CPU analysis, and acquire other new performance-enhancing skills.

Steve Lewallen: Well good morning everyone.

Thank you for coming to the What’s New with Instruments session.

My name is Steve Lewallen. I'm the manager of performance tools here at Apple, and today we're going to learn all about what's new in Instruments.

So let’s get started.

So today we’re going to learn about a new, improved user interface.

We’re also going to talk about significant recording techniques that I think will be very helpful for you all to know.

And we’re going to talk about advancements to existing Instruments.

And finally we’re going to discuss more about our significant new Instruments.

So, let’s talk about the UI.

Front and center in the user interface is the Jump bar.

This shows you where you are and how you got there.

It’s possibly the most important UI control in the interface.

You can use it to switch between instruments in a particular trace document, and you can use it to switch between detail views of a particular instrument.

And then let's say you focused in on some important piece of data from one detail view and you want to get back to where you came from: just click anywhere on the Jump bar.

Click a previous segment, for example, and that'll take you right back.

So we’ve also improved how you can access the views and see the data in Instruments.

So now you can collapse away just about anything you don’t want to see.

Collapse away the track view, collapse away the Instruments list and configuration panel on the left.

Collapse away the extended detail view.

And if that’s not enough, you can go full screen which now has the mini bar.

If you move your mouse to the top of the screen the mini bar appears, move it away it goes away.

Same thing as the rest of the full screen Mac apps.

And we've made so many improvements to the call tree that I think we can declare it the coolest call tree in the universe, if that's possible.

So to do that the first thing we did is give you more room for that symbol tree.

I think that’s the most important part of that interface.

We did that by combining several columns together and affording more room for the symbol tree itself.

We also enabled the call tree to slide under the statistics view.

You always had the problem where a node was far off to the right; you'd scroll and scroll to get it into view, and then the statistics would scroll off the screen, so you couldn't see how much was attributed to that node.

So now you can.

Another problem that people would have in using any outline view is that it’s hard to understand what is a sibling of another node or what are the children of a node.

So we added what we call sibling banding.

So you select a node and you’ll see this vertical band appear and then you can scroll up and down as much as you want and rest assured that anything to the right of that band is going to be a child node within the selected hierarchy.

And of course, when you move your mouse over the nodes, you also get a highlighting sibling band as well.

We also added, as with Xcode 4, backtrace compression filtering.

This is really handy.

Basically, this allows you to use the slider you'll find at the bottom of the backtrace to collapse library boundaries down to just your own code.

So you can see what called your code and then focus on your own frames.

Very handy.

And of course we have the source view, where we have performance data in line with the source.

What we’ve done with the latest release is allow you to access this directly from other data views.

So for example, in the picture before you we're on the call tree view; if we double-click on this Create New Game API, it moves into the Jump bar and the source view comes in.

To go back to the call tree, just click one of the items in the Jump bar.

Very simple.

And of course, all the annotations are colored by severity.

So you want to pay most attention, soonest, to those things colored the darkest.

And we also have full disassembly view, both for Intel and ARM and we still include the performance annotations in those views.

You can see this by function or you can see it per annotation.

So if you want to see just the disassembly for an annotated line, you can do that.

And we have timeline flags.

You can think of these as bookmarks for the trace document.

You can add these manually yourself.

So you see something important happen, you want to remember it later.

You Option-click in the ruler and add your own flag.

Last year we also introduced the Zombies instrument, which detects when over-released objects are messaged.

We had it add its own timeline flags automatically.

And now in the latest iPhone OS 4.0 SDK, we show multistate app transitions in the ruler as well.

So when your app goes from running in the foreground, to transitioning to the background, to running and then suspended in the background, and so on, you can see flags for those. When you look at this trace document a week later, that will explain why, for example, CPU activity dropped off.

Well it’s because it became suspended in the background.

So now let’s go and take a look at a demo of Instruments and we’ll just sort of learn our way around Instruments and solve a couple of problems using a couple of very common instruments.

So I’m going to go over to my computer here.

Let me go to the demo machine.

All right.

So a colleague of mine has been writing a reader app, and I liked it so much I was using it, but I wanted it to remember its state, the book I was reading and my place within the book, and restore that the next time I launched the app.

So I went ahead and did that and let’s just see how that looks in the simulator.

So here it is, it restored the state.

If I go somewhere else, pick a different book here, scroll up to the title.

We’ll quit that and I’ll click it again and it restores its state back.

So this is great.

This is awesome.

I’m so excited, I’m just ready to push this to the App Store right away.

But I’ve only tested this on the simulator.

What I really should do is test this on the device as well, because the performance on the device is going to be different than the performance on a desktop computer.

So let me quit the app and let’s switch to the iPhone in WolfVision here.

I'll switch to the device, build and run, and it's compiling, linking, installing the app... and there it is.

And boy that’s taking a long time.

OK, there’s my app.

Well clearly the performance characteristics on startup were very different on the device than they were in the simulator.

So, back to the demo machine please.

What I want to do is use the Time Profiler, new in the iPhone 4.0 SDK, to see why that's taking so long.

So let me go to Instruments and by the way, don’t get confused.

There are two sets of tools here.

One is the latest shipping SDK tool set and one is the Xcode 4 developer preview.

We’ll be using both in this session.

So let me open the shipping version of Instruments here and I am going to time profile the reader app.

So I will go up to my iPhone section and double-click on Time Profiler and I will pick my little reader app and let’s start it up.

And when this, when the page displays on the phone, I’m going to stop this.

So there we go.

So let’s see what’s taking so long in this code.

So the first thing I want to do is I want to focus in on my code.

So what I like to do first is hide system libraries and maybe hide any missing symbols I have.

And then I will focus in on this text view controller viewDidLoad, that seems to be taking up an enormous amount of time.

And I can see, I come to main and the view is loading.

Oh, this is where I’m actually restoring that state and we come into viewDidLoad, let me double-click on that and move this over some, ah, so I’m taking the contents of the book and I’m setting it on the text view.

Hmmm, well there’s nothing really I can do about that.

I mean, I could spend today, tomorrow and the next day re-architecting my app, but we don't have time for that.

So what I'd really like to do, and what's common when you have an app that requires a lot of startup preparation, is to make it a background app.

So let’s go ahead and do that.

I’m going to quit Instruments here.

The first thing I need to do is go to my Info.plist and I’m going to say that actually I do want this to run in the background and I’ll save that.
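The session doesn't show the plist contents on screen, so this is a hypothetical fragment. With the iPhone OS 4.0 SDK, apps built against the SDK multitask by default; the real, documented key that controls this is `UIApplicationExitsOnSuspend`, and keeping it `false` (or absent) lets the app run in the background:

```xml
<!-- Hypothetical Info.plist fragment; the key is the real iPhone OS 4
     setting, but the reader app's actual plist isn't shown in the demo. -->
<key>UIApplicationExitsOnSuspend</key>
<false/>
```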

And then, in case I’ve switched to the background and for whatever reason the system had to terminate my app, I want to make sure that I capture the state that I currently have in there.

What book I’m reading again and where I am in that book.

So let me go to my app delegate, and just like earlier, where I had already added a call to saveApplicationState in applicationWillTerminate:, I want to do that when I'm going to the background.

So let me add [self saveApplicationState], save, and I'll build and run again and we can see how this works out.
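The pattern described here can be sketched as follows. This is a reconstruction, not the actual sample code: the method name saveApplicationState comes from the session, and applicationDidEnterBackground: is the standard delegate callback for this situation.

```objc
// Sketch of the pattern from the demo; saveApplicationState is assumed
// to persist the current book and reading position.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // The system may terminate a backgrounded app at any time,
    // so capture the current state now.
    [self saveApplicationState];
}

- (void)applicationWillTerminate:(UIApplication *)application
{
    // Already present in the app before this change.
    [self saveApplicationState];
}
```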

So let’s go back to the device in WolfVision there and it’s running and of course we haven’t solved the initial start up cost, but let’s see how backgrounding works for us.

OK, the app has started.

So what I’m going to do now is click the Home button, it’s gone to the background and make it come back.


So it comes back very quickly.

That’s a much better user experience.

So we can go back to the demo machine again.

So now I’m ready to ship my app right?

I mean, all I did was take an API used in one place and use it somewhere else I never really intended it to be used, and this is going to run all the time now in the background. But what could go wrong, right?

Well, if it’s running in the background, I have a responsibility to make sure I control my memory overhead.

I don’t want to be leaking any memory for example.

So I doubt there’s any leaks, but let’s check anyway.

So we’ll just stay on the demo machine and another way that you can run Instruments against your app, is actually from Xcode itself.

So I go up to Run with Performance Tool, down to Leaks.

And this will start up the app, start up Instruments on the device, etcetera and we’ll see if we have any leaks.

So the first thing I’m going to do is disable automatic leaks checking.

Leaks is automatically set up to check every ten seconds or so, but I want to make the app go back and forth on the phone a little bit before we do that.

So keep your eye on the Mac here, but I'm going to make the app on that device go to the background and bring it to the foreground, maybe a couple of times.

OK. And now let’s check for leaks.

It's analyzing the process here, and that just takes a little bit of time as it goes over everything... oh, I do have a leak.

Who would have thought?

OK, well what I want to do now is stop the trace.

I want to investigate these leaks.

So what I can do is focus in on one of them.

When you see one of these circles with an arrow in them, in the table views and outline views, those are Focus buttons.

So if I click on one of those, I’m going to see that the address I focused on is now added to the Jump bar.

And so for example, if I wanted to go back to the view I was at, I just go up once in the Jump bar, to this Leaked Blocks element; I click it and I'm back.

Let’s go and investigate that leak.

So let’s focus in again and let me open the extended detail view.

And now let me actually use stack compression to focus in on just my own code.

Great. So now I can see really just my own code.

If we zoom in here we can see, oh, the leak is actually coming from the saveApplicationState call.

Hmm, well I'd better take a look at that.

Let me double-click on it and I put my extended detail view away.

Let’s focus in again.

So what’s going on here?

Well, we're seeing different things that are being leaked, and I have this top-level object called myRestoredData. What's going on there?

Let me see, did I not release that?

So there's another use of it: setObject on myRestoredData, setObject again, setObject, then using it to register defaults.

Oh, there’s no release.

OK, so now I need to go and I need to fix that.

So let me hit the Xcode button here in Instruments, go back to that code, and I'm just going to add a quick [myRestoredData release]. Then I'll do a little trick here just to get it back on the device: build, install and run it.
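The leak the speaker describes follows a classic retain/release shape. This is a hypothetical reconstruction; the variable name myRestoredData is from the session, while the keys and value variables are made up for illustration.

```objc
// Hypothetical reconstruction of the leak: myRestoredData is created
// with +alloc (retain count +1), handed to registerDefaults: (which
// keeps its own reference), and then never released by this code.
NSMutableDictionary *myRestoredData = [[NSMutableDictionary alloc] init];
[myRestoredData setObject:currentBookName forKey:@"book"];     // names assumed
[myRestoredData setObject:currentPosition forKey:@"position"]; // names assumed
[[NSUserDefaults standardUserDefaults] registerDefaults:myRestoredData];
[myRestoredData release]; // the missing release that Leaks pointed at
```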

And then we’ll stop that.

We'll just wait for it to completely terminate on the device.

OK, so we’ll go back to the run with performance tools menu and we’ll say leaks again.

This will re-use the existing Instruments document it was using for leaks and now I will go and do the background, foreground exercise again.

Do it a couple times, few times there.

And we will check for leaks again.

Analyzing the process.

Excellent, no leaks.

So now I think the app is in better shape to go to the App Store.

I should still add a splash screen and continue on with the refinements, but it’s better.

Now I do want to draw your attention quickly to what is in the ruler bar.

These icons here.

These are those state transition flags I told you about.

So if I click on one of these, it tells me the reader switched to background running, and if I navigate over I can see when it came back to the foreground, and we see all of these state transitions.

So that again, is very useful, especially when you go back later and try to understand why you saw a change in performance data that you’re gathering.

You can say oh, it was in the background and suspended and so that’s why.

OK? So that was a brief exercise in using Instruments and how you get around in it.

If we can switch back to the slides, please.

We saw the major controls: the Jump bar, backtrace compression, and using Instruments to target an app directly from Instruments itself, as well as from Xcode's Run with Performance Tool menu.

And of course two of the most important instruments we have, I think: the Time Profiler, to see where you're spending your time, and the Leaks instrument, to see if you're leaking any memory.

So you should get to know both of those instruments.

So moving on, I think that in this latest release of Instruments, we have the greatest, the most efficient Instruments ever.

And we provide you the greatest latitude ever in how you do your recording.

So first let’s talk about what we call immediate versus deferred.

So immediate mode is the classic Instruments mode.

You hit the Record button and Instruments processes and displays the data immediately.

And what's great about this is that you can tinker with your device; say you're in a game, you shoot off a missile, there's an explosion, and you can see the CPU usage spike immediately, you can see the frame rates change immediately.

So it gives you that immediate feedback, and you know exactly what you want to look at in that data set.

But the downside is that, hey, Instruments is running on the same computer and it's using some of the CPU too.

So sometimes, especially for sampling instruments like Time Profiler, you may get gaps or valleys in your data collection; this is an exaggerated graph, but it happens.

Your data collection isn't packed as tightly as you'd like it to be.

So that’s why we added a mode we call deferred mode.

Deferred mode processes and displays data after the recording is over.

So the benefit is, Instruments stays off the CPU, it’s not doing anything until you hit Stop and then it consumes and displays all that data.

The downside, of course, is that it's hard to correlate events: when you shot missile one versus dropped the bomb, which of those corresponded to the big frame rate drop?

So you have to judge what you’re going to use.

But with deferred mode you'll get a more tightly packed data set, and that can be very useful when you're looking at something that's very time sensitive.

So when you launch your apps, we've had various options, but they weren't in a very good place in the UI, so we've moved them to the target chooser.

One of the options is to choose where you want your standard I/O to go.

So you can have it go to the Instruments console or the system console or you can just discard it.

Maybe you have a very, very chatty app and you don’t want the system overloaded with all that I/O, so you can just discard it completely.

And on the Mac, if you have a 32-/64-bit capable app, you may want to run it in both 64-bit and 32-bit mode, especially if it's using a lot of memory or a large address space, to see how it performs.

So you have those controls.

In the simulator, you have similar controls.

So you can change where your console I/O goes, but you can also say what kind of device you want the simulator to simulate and what kind of SDK you want it to use.

So that’s very helpful to know as well.

Now if you're on Mac OS X and you're developing launch agents or daemons, which are little headless services you can have on the Mac, it's always been a problem to analyze that primordial startup segment of time when the daemon is first hooking up to everything.

A lot of times that can be the most interesting thing to look at.

Does it leak any memory right there?

You know, what’s going on with CPU usage right there?

So what we did was integrate tightly with launchd, so that Instruments can start analyzing the process as soon as it's created by launchd itself.

You get exactly the same amount of data you'd collect if you'd launched the app directly from Instruments.

So how do you do this?

Well, you go to the target chooser and you pick the daemon's or agent's plist.

The plist, not the actual binary.

Because Instruments needs to read that plist and find out some things.
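For reference, a launchd job plist of the kind you would pick here looks roughly like this. The label and program path are placeholders, not anything from the session:

```xml
<!-- Minimal, hypothetical launchd job plist; Label and ProgramArguments
     are the standard required keys Instruments would read. -->
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/mydaemon</string>
    </array>
</dict>
</plist>
```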

Then you hit Record and you’ll get a dialog like this on the screen.

Basically Instruments is saying: OK, go do whatever it is you have to do to tickle launchd into launching your daemon.

Maybe it is connecting up a network or creating a file somewhere or something of that sort.

As soon as that happens, this dialog will go away and Instruments will start recording right away.

So you’ve gathered all that critical data.

And let’s also talk about wireless, for the phone.

So it’s possible for you to actually use Instruments, to profile your apps running on the phone, without being connected to USB.

So why would you want to do this?

Well it frees up the USB port if you’re an accessory developer.

Let’s say you’re developing something for an automobile manufacturer, some docking station or something of that sort and you want to see the performance of the app when you connect that accessory up.

Or let's say you're a game developer using the accelerometer, and it's a real pain when you're testing your game aggressively and that cable is slapping you in the head as you move the device around, or it's in your way as you grab it, making things very awkward.

You just go wireless.

Disconnect it from USB.

It’s much easier to do.

Now there's one thing you need to remember when you go home or back to your workplace and try this: if it doesn't work, the first thing to check is that you have Bonjour multicast enabled on your Wi-Fi access point.

If you don’t, it’s not going to work.

Now to find out how to do that, just refer to the standard Bonjour documentation.

There isn't anything iPhone- or Instruments-specific about that.

So how do you enable this?

Well, you hold down the Option key while you go to the target chooser, and you'll see your device appear with an Enable "Your Device Name" Wireless item.

So you select that and then you’ll see your device appear but it will be kind of grayed out.

That's because it's handshaking with your machine and getting everything set up; a second or two later it's ready, and at this point you have not one but two Instruments daemons running on your device.

You have one dedicated to USB and you have one dedicated to wireless.

Now, you don't want that, because they're both using resources.

So you want to unplug your USB cable; when you do, the USB-dedicated Instruments daemon will go away, you'll see that item disabled, and now you can go completely wireless and do everything you can normally do with Instruments over USB, wirelessly.

Of course your performance will vary based on the performance of your Wi-Fi hotspot, etcetera.

But it’s a very handy thing to do.

Now, when you want to disable this, because again it is taking up radio resources and it's not quite as fast as USB: just hold down the Option key, go back to the target chooser, and the menu item will have changed to Disable "Your Device Name" Wireless.

Select that and it will go away.

At this point, the only way to get it back is to connect up USB, so that you can re-enable it.

Another way to disable it is actually to just reboot your device.

And so now we’ll just do a wireless demo.

I really don’t think so.

No wireless demo, not today.

Actually I did have a slide for that, but not today any longer.

So let’s move on to symbolication.


So often when you take a trace, it has to do with your own code.

There are instruments in the app that don't have anything to do with code at all, but most of them do.

And what's very critical is for Instruments to be able to turn those hex addresses into real symbol names, like a function name.

To do that, Instruments will look in SDK paths to find symbol-rich binaries, and it will also use Spotlight to look them up.

In order for Spotlight to help us find your symbol-rich binaries, you need to make sure you build your app, whether for the phone or for the Mac, with DWARF with dSYM File.

So go to your Debug build settings; it probably says DWARF, and switch it to DWARF with dSYM File.
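In build-settings terms, the switch being described is the real Xcode setting DEBUG_INFORMATION_FORMAT. As an xcconfig-style fragment it would read:

```
// Build-setting fragment: the setting name and value are the actual
// Xcode spellings for "DWARF with dSYM File".
DEBUG_INFORMATION_FORMAT = dwarf-with-dsym
```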

Now, if you do that, you need to still make sure that it’s visible to Spotlight.

Spotlight won’t look in any encrypted disk image and you might have Spotlight already set to not look at certain volumes.

So you need to make sure Spotlight is looking in that location.

Now if you don't have any symbols, maybe someone else took a trace without all the symbol-rich binaries and handed it to you, or you found yourself in a situation where Spotlight could not find the binaries, you can always resymbolicate after the fact.

It’s very simple to do.

Go to the File menu and select Re-Symbolicate Document.

You’ll see a sheet pop down.

You select the paths that point to the right symbol-rich binaries or directories, we'll look recursively in all of those, and then you just click Symbolicate.

It’s that easy.

And it'll turn all the hex addresses into symbols.

And then I'd like to make one note about using Instruments against apps in the iPhone 4.0 SDK, just because it's bitten us.

We've all grown accustomed to this: I hit the Home button, my app quits.

Instruments stops if I was tracing with it.

But now the Home button may no longer terminate the app; the app may just go to the background, and Instruments will dutifully keep tracing.

So what you need to do is just get in the habit of stopping the app from Instruments itself.

Pressing that Stop button.

Another issue you can run into: you're debugging your app, you hit the Home button and forget you did that, and then you try to trace with Instruments; Instruments will put up an error saying the debugger has control of the process.

So you also need to remember, when you're using the debugger, to hit its Stop button before you use Instruments against the same app.

So we’ve made various improvements to our existing instrumentation we’ve had for a while and I want to go over those improvements.

So first is Time Profiler.

This is for both platforms, and it's by far the most efficient time profiling mechanism we have.

It takes its samples in the kernel, and you can use it in two ways.

By default, you’re looking at running threads.

So you’re just looking for code that is running but it’s not running very efficiently, so you want to find those hotspots.

But you can also use it in All Thread States mode.

You go to its inspector and click All Thread States.

So it can find threads that are blocked.

So maybe you have a choppy app or it’s deadlocking somewhere, you switch Time Profiler to All Thread States and you’ll find those threads as well.

And of course, it can also see all processes on the system, not just a single process.

So that’s really nice when you want to see you know, what in the world is going on in the system?

Why is it so slow?

You use all processes and you can see who the guilty party is.

So we've also added kernel frames to the backtraces and call trees in Time Profiler.

And we did this in a really cool way.

You can see the user-space stack frames, and then, in a continuous line, you see the kernel-space frames.

So you see yourself transitioning all the way from user space, into the kernel and maybe into some kernel extension.

It’s really kind of far out to see it when we finally put that together.

It’s really awesome.

If you’re a kernel extension developer, it’ll be really important to you.

And of course, Time Profiler, as we saw a moment ago, is also available on the iPhone 4.0 SDK.

And it's far more efficient than the CPU Sampler you were using before. It may not seem intuitive, but deferred mode really matters for tracing on the device as well.

Now of course, when you're in immediate mode on the Mac, you have Instruments running and you have your target process running, and obviously they're fighting over the same CPU. The reason it matters even on the device, when Instruments is still running on your Mac, is that in immediate mode the Instruments daemon on the device has to take the data out of the buffers we're collecting it in, package it up, and send it over the wire or over wireless, and all of that takes cycles.

So it makes a big difference to put your trace in deferred mode before you take it on the phone as well, proportionately even more of a difference than it makes on the Mac. In deferred mode the device holds on to those data buffers until you hit Stop on the recording, unless we fill them up, in which case we flush them to the Mac so we don't constrain the memory your own app needs.

So deferred mode really does matter, even for device time profiling.

So we've also added something new called HeapShot; pardon me, it's actually a feature of the Allocations instrument rather than a separate instrument.

HeapShot allows you to track down what we call abandoned memory.

We call it abandoned because it’s not leaked, but you allocated it, maybe you used it once long ago and you basically abandoned it.

You forgot all about it.

There it sits, still referenced; you run Leaks and you're not going to see it, but it's taking up space until you quit your app.

We find this can be an even more serious problem than leaked memory, and if you have an app that maybe you didn't originally develop, or that's several years old, this kind of thing builds up.

Because someone, somewhere, wrote some caching routine for a feature that has since been removed, but no one knew about the caching routine.

It’s been dutifully caching away its memory and not being used at all.

So that’s a waste.

So you want to analyze your app using the HeapShot technique to find that abandoned memory.

We also have VM tracker.

This allows you to look at the VM usage of your app, on the Mac and on the phone.

And again, it's part of the Allocations template, as a separate instrument.

It identifies regions by tag, and you can look at things like resident and virtual size.

And it's great to use against a Core Animation app, or something where you're loading images off disk, because you can associate how much memory that's taking with the actual file on disk.

So that’s very cool.

So now I’d like to demo, just a few advanced recording techniques, take a look at immediate versus deferred mode as well and see why that matters for you now.

So if we can switch to the demo machine please.

This will be just a Mac demo.

OK. So let’s start up Instruments and we will launch the Time Profiler instrument.

I’ll choose the Mac version there.

And let's look at a little app that was written a long time ago, called GLUT Mech, and we're going to first run it in immediate mode.

We'll get this guy going here, and let me bring up Terminal and see how much CPU Instruments is using right now.

So Instruments is using, here it is, it’s going to bounce around between 4 and 7%, depending on what’s going on as the GLUT Mech app runs.

So 4 to 7%. Let's quit the GLUT Mech guy, go up to the File menu, and switch to deferred mode.

And let’s run our little guy again.

Now this darker appearance in Instruments is deferred mode.

So when you see that, that’s why.

And let’s go and look at top again and let’s look for Instruments.

Now basically, other than a burp here or there, Instruments will stay at 0% for long, extended periods of time.

So this makes a significant difference as you’re analyzing some very CPU intensive application.

So that is immediate versus deferred mode.

That’s an important question that people have had about CPU usage and so we thought we’d answer it now.

Another thing people have wanted to do, they'll ask me: how can I precisely time filter my data between two points?

Say, between when call A was made and call B was made, or when event A happened and event B happened?

Well there’s a couple of ways to do that.

A sort of poor man's way of doing it is tagging your code with console output.

So let me run the GLUT Mech guy again and we’ll say, go and stop, go, stop, go, stop.

So let’s see how we could time profile between a go and a stop.

If I go down to my console here by using the Jump bar, I'll select Console, and we can see some console output I had added to that app: it just says "starting animation" when you hit the Go button and "stopping animation" when you hit the Stop button.

It’s very simple.
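The tagging being described is just ordinary logging. A hypothetical sketch; the action method and animation calls are guesses at what the sample app might use, not its actual code:

```objc
// "Poor man's" tagging: plain console output that Instruments' console
// view can line up with the timeline via its inspection head.
- (IBAction)go:(id)sender
{
    NSLog(@"starting animation");
    [self startAnimation]; // assumed method name
}

- (IBAction)stop:(id)sender
{
    NSLog(@"stopping animation");
    [self stopAnimation]; // assumed method name
}
```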

But what’s interesting is that, the both the console and its left hand margin and the timeline view, both have timeline inspectors.

So if I zoom in on that and I take the inspection head in a timeline, I can see that the inspection head in the console move around to where that actually happened.

And if I move the console.

[ applause ]

...I can drop it, and the timeline inspection head will move to where that is.

So let’s say on this second starting animation, I want to begin a time filter here.

Obviously this is something you'd do after the recording is stopped; otherwise the data would be changing and driving you crazy.

So I can use this Inspection Range toolbar item along with it: I precisely move to this point in time by selecting this location in the console, and now I'm going to click the left-hand Inspection Range button.

Now what that’s going to do is say, begin a time filter from this point on to the end.

So I’ll press that and now it’s filtering it from that point on to the end.

And that initial event has now been filtered out in the console as well. So now I'll move to my next stopping event, and here it is.

And now I can use this right hand side button in inspection range and say OK now stop the time filter there.

I’ll do that and now I have a precise time filter and I can go back to my sample list or my call tree and all this data is filtered within that time range.

So it’s pretty precise.

But on a system that’s really loaded down, that I/O is not going to come out exactly when you emit it, so there’s another, better way to do this as well.

So, I’ll zoom out here, and what I can do is use dtrace; this will work on the Mac, both for Mac apps and for Simulator apps.

So if I go up to the Instruments menu and I say trace symbol, it knows all the symbols it’s seen, so I can say start animation and I’ll add a probe point for where start animation is really actually called.

And remember, dtrace is managing all this from the kernel so it’s going to be really precise and I can do that as well.

There’s a command accelerator for that, so I can just press Command-T and type stop animation.

So now I have dtrace probe points for starting and stopping animation.

So now let’s go ahead and run the GLUT Mech app again and let’s have it do some things.

Go, Stop, Go, Stop, Go, Stop.

And let’s quit.

And now, in the other dtrace instruments, we see spikes basically representing how deep the stack was, at the explicit points where these events actually occurred, where it actually started the animation and stopped it.

So now if I want to filter, I’ll click on my start animation instrument first, and I’m going to use the inspection range again to select the first event here.

Now we’re lined right up, and I’ll start my time filter there; then I’ll go to stop animation, and it’s conveniently moved the inspection head to the first location.

Stop and I’ll use the right side of the inspection range there and now I have a really, really precise time filter.

If I go back to my Time Profiler instrument and look at the call tree view or the sample list, all this data is going to be precisely filtered down to that time range.

So those are very useful techniques for how you trace data and how you look at it.

Now this is a pretty neat little setup I have here.

I have set the recording mode to deferred, the default mode in Time Profiler is immediate and I have added these probe points for starting and stopping the animation.

So I’d like to reuse this and I can by making this a trace template.

So I can go over to the File menu and say Save as Template; a sheet will pop down, and the directory it opens up to is in your user account:

Library/Application Support/Instruments/Templates.

And I can say, call this you know, my cool template and add a description, choose an icon or something.

This is fine for me though and hit Save.

And now both Xcode 3 and Xcode 4 are aware of that template.

They’ll look in that directory and Instruments of course is aware of that template as well in its own user section of the template chooser.

And now I can reuse that over and over again.

So those are some techniques for doing performance analysis, and the difference between immediate and deferred mode, that I thought you would find useful.

So let’s just go back to the slides.

[ applause ]

So we also have some new instrumentation in the latest SDK.

One of the things we’ve added is the Energy Diagnostics set of instrumentation.

This is a coarse-grained measuring technique to look at the energy consumption of your device, with or without your app running on it.

It will measure battery power, the CPU usage of your app, as well as other important services on the phone like media services.

And it will track the on and off state of vital devices that consume energy, the screen being a really big one and Wi-Fi and Bluetooth, etcetera.

We also have another really cool new instrument called the Automation instrument.

This is totally awesome.

You can actually use it to automate and exercise your app.

So you can simulate tapping various controls in your iPhone app, and if you use it in conjunction with another instrument, say Leaks, and you have some setup where you run this before each submission, or an internal release, or before your GM, you can check for pass/fail messages and other results via the JavaScript that you write for use with this instrument.

And then you can store those results away, in some separate database perhaps, or just export them as comma-separated values, for example, and refer to them later on to see how you’ve been progressing in your QA testing.
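As a sketch of that last step, suppose the exported CSV had one row per automated check. This little script tallies pass/fail per run so you can watch the trend across submissions; the column names and file contents here are assumptions for illustration, not Instruments’ actual export format.

```python
import csv
import io

# Hypothetical export: one row per automated check per run.
# (Column names and values are made up, not the real export format.)
exported = io.StringIO(
    "run,check,result\n"
    "1,leaks-after-tap,PASS\n"
    "1,launch-time,FAIL\n"
    "2,leaks-after-tap,PASS\n"
    "2,launch-time,PASS\n"
)

# Tally pass/fail counts per run to see QA progress over time.
tally = {}
for row in csv.DictReader(exported):
    passed, failed = tally.get(row["run"], (0, 0))
    if row["result"] == "PASS":
        passed += 1
    else:
        failed += 1
    tally[row["run"]] = (passed, failed)

for run in sorted(tally):
    passed, failed = tally[run]
    print(f"run {run}: {passed} passed, {failed} failed")
```

The same tally could just as easily be appended to a database table keyed by build number, which is the “separate database” idea from the session.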

There’s also a really cool OpenGL ES Analyzer instrument.

This is really amazing.

Of course it can identify bottlenecks in your own code, but it can also provide you actual advice, recommendations as to how you can improve the performance of your GL app, which is just really cool.

It just comes out in English and says hey, you’re doing this and you should be doing this other thing.

So it’s a little expert in a box for you.

And finally, let’s move on to our really new instrumentation, the Xcode 4 developer preview.

Namely system trace.

So this provides you with comprehensive information on the entire system along with one or more processes, looking at thread scheduling: why threads are scheduled and how long they stay on a core.

And you can look at system calls and VM operations, etcetera.

The system trace template is made up of three individual instruments.

The scheduling instrument: this tracks threads, context switches, the reasons why, and the tenures, how long threads stay on core.

The system calls instrument, these are calls into the kernel from user space.

And the VM Operations instrument, so you can see page cache hits and zero fills and all sorts of cool stuff there.

And also, system trace is the first set of instrumentation really that takes advantage of a new feature in Instruments, called the highlights view.

So system trace especially, can just record a ton of data, but we have other instruments that can do that too.

And it’s a real labor sometimes to sift through all of that data looking for the problems, especially when you have to correlate a lot of data together.

So with the highlights view, we take all the key statistics, the very core statistics, and we summarize them in little graphs that show you the top five or so extreme issues, such as processes or context switches, and you can quickly key in on where you really want to dig in by looking at those graphs, rather than looking at all the data and becoming bleary-eyed at everything that’s presented to you.
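The summarization itself is simple. Here is a sketch of the top-five idea, with made-up process names and context-switch counts standing in for what the Scheduling instrument would actually record:

```python
# Hypothetical per-process context-switch counts from a recording.
# Surfacing only the top five tells you where to dig in first,
# instead of scanning every row of raw data.
context_switches = {
    "MipMapUtility": 48210,
    "GLUT Mech": 17544,
    "WindowServer": 9120,
    "kernel_task": 7301,
    "Instruments": 2210,
    "Finder": 310,
    "Dock": 95,
}

# Sort descending by count and keep the five most extreme offenders.
top_five = sorted(context_switches.items(),
                  key=lambda kv: kv[1], reverse=True)[:5]
for name, count in top_five:
    print(f"{name:>14}: {count}")
```

In the highlights view each of these “top five” lists is drawn as a small graph, one per key statistic, which is what makes the extreme cases jump out.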

So I have one more demo.

It’s a pretty short and simple one.

But let’s go to the demo machine and take a look at system trace.

[ Pause ]

So, as I said, I have the developer, Xcode developer preview installed as well, so I’m going to launch that version of Instruments now.

And I wanted to show you what system trace can tell you about and visualize for you: something that seems so simple, why an app’s performance is degraded when some other app is doing something intensive.

So let’s look at system trace and we’re going to look at all processes here and I’m going to open up a couple of apps here.

A little app that scales down images to create mipmaps, and our little friend the robot here.

Let me get these guys going.

Now, the previous version of that robot we saw actually had a sleep in it.

Because it made me nauseous to see how hyper this guy actually runs.

Apple had this example a long time ago and it used to run very smoothly because machines back then were a lot slower.

Now machines are so fast it looks like it had a bit too much caffeine today.

But I want to show you now the change in performance when I start my mipmap app.

So this is going to use a lot of threads and be pretty intense.

And it’s slowed down dramatically and we can see the reason why if we actually trace this with system trace.

So I’ll start that app in deferred mode here and I should have actually started that before I hit that button there.

So we’ll take some data, and that’s enough. This is a MacBook Pro here.

It’s got just a couple of cores, and it’s gathered a ton of data, so it takes a little while to process, but luckily it’s doing it deferred.

So now let’s quit our two apps here and let’s look at what we have real quick.

So if we go to the highlights view, we can see, and we’ll expand this, a chart of what’s going on. If we look at the top here at the Scheduling instrument, at process context switches, we can see that a great deal of threads are being switched on and off core.

So there’s far more threads that want time in this scenario than there are cores to service them.

And if we click on one of these charts, say process context switches, we can see that the mipmap utility has had a huge number of context switches. But GLUT Mech, which is a purely single-threaded app, has a lot of context switches too. If we use the little Focus buttons again on GLUT Mech, look down, and go to all threads, we can see how this Jump bar has developed.

We were in the summary view, then process summary, GLUT Mech threads, and then thread tenures. If we go over to this reason column, we can see a lot of preemption and blocking.

So this poor little robot was having a hard time running on a system with so many other things going on demanding time of the cores.

So I just wanted to show you this example because it demonstrates how you should use the system trace.

You need to be aware, and we can go back to the slides, of the system you’re running on.

It really is a system trace, and so one of the good things to do with it, once you’ve made sure you have the state you want to reproduce, is to run, say, a heavily multithreaded app on a two-core machine and on an eight-core machine, or a memory-intensive app on a machine that has very little memory and on one that has a lot, to see how everything works together to give you, or degrade, your performance.

So that is a demo of system trace.

And so in closing, I’d like to just say that we strongly believe you know, that Instruments is your go-to tool for performance analysis and it clearly has more capabilities than ever.

But I hope it’s still, in a way, simpler to use, and it provides more accurate insights than ever into what’s going on, from the kernel all the way up to user space.

So, there are many sessions following this one throughout the afternoon and the rest of the week that I strongly encourage you to attend, that will be using Instruments in one way or another or demonstrating, specifically, how you use Instruments for memory analysis, CPU analysis, etcetera.

And if you need any more information, please refer to Michael Jurewitz, our Developer Tools Evangelist, the documentation online, as well as the Apple Dev Forums.
