Advanced Tips and Tricks for High Resolution on OS X

Session 245 WWDC 2012

Dive deeper into making your apps stunning for high resolution on OS X. Learn how to work with OpenGL surfaces and bitmaps, handle custom layer trees, set up notifications for resolution changes, and examine how to get great performance when laying out different types of content onscreen in a high resolution environment.

Patrick: My name’s Patrick Heynen, and I’m here to talk to you about high resolution again, but this time in a new and trickier fashion. So what’s the purpose of this talk? Well, any of you who’ve been involved in shipping real products know that 80% of the task is easy, and it’s that last 10-15% where a lot of the important stuff happens to make your product actually go out the door. What we’re going to do in this session is go deeper into high resolution: pull back the covers a bit, talk a little about how some of it works, and how to take full advantage of all the new APIs to achieve full pixel precision, so that you can both work around any subtle bugs you might have and achieve the highest quality you possibly can. We’ll also cover leveraging advanced graphics technologies under high resolution, if you’re using things like Core Animation and OpenGL, and just how to get the best visual quality and performance for your application on these wonderful new machines that you’ve hopefully gotten a chance to look at and play with this week.

All right, so what are we hoping you will learn in this session? How to work with OpenGL contexts: we’re going to cover some very specific things about OpenGL contexts that apply to their usage in AppKit on high resolution displays. Also, if your application takes advantage of Core Animation directly and you have custom Core Animation layer trees, there are some unique considerations we’re going to tell you about, and how to handle them properly. Also, drawing into off-screen bitmaps: it’s not all about drawing on screen into windows. We made a lot of things easy, with automatic artwork handling and, of course, resolution-independent drawing, but sometimes you have to do stuff yourself; you have to draw pixels into bitmaps yourself.

We’re going to cover some of the things you need to be aware of for that. Most importantly, once again, I want to remind you: the thing that makes high resolution different on the Mac, and the Mac different in general from iOS right now, is that we have a much more dynamic display environment. You can have multiple displays, and you can have any one display switching between low resolution and high resolution at any time. Given how the Mac works, it’s a much more dynamic environment. That’s a strength, but it’s also a situation your software needs to take into account and handle appropriately. Then we’re going to go into an interesting and important deep dive, because text is really the primary feature right now. If you’ve had a chance to look at these displays, it’s all about the text. You just look at them and go, “Wow.” It’s almost like looking at a laser printout.

But there are some implications for some of the text and font technologies we have in our system, and for those of you who spend a lot of time on text in your applications, you’ll want to be aware of these. Also, finally, some best practices throughout all these different sections about how to achieve quality and performance under high resolution. We’ve got a lot of material here; hope you had your coffee. As a brief background before we dive into the individual sections, I just want to go over what the technology we’re talking about is, in case you haven’t been to the introduction talk. We have new high resolution display modes for Retina displays. The consequence of this is that screens and windows each have a 2:1 pixel-to-point density ratio, and the frameworks provide automatic scaling to your applications to ensure a consistent coordinate system between 1x and 2x operation. It means your software sees the same thing going on for screen, view, and window coordinate systems across both standard resolution and high resolution, which is great for compatibility, but it does abstract you from the pixels. Sometimes you need to know about the pixels, and that’s what this talk is about.

Of course, finally, the Quartz Window Manager ensures consistent presentation across multiple displays. This is what allows things like window drags between a high resolution Retina panel and a low resolution external display to work seamlessly, but there are some details we’re going to go into. Okay, speaking of details, I’d like to bring up Mr. Chris Dreessen, who’s going to tell you about NSImage.

Chris: Thank you, Patrick. [clapping] So let’s talk about NSImage. As most of you probably know, NSImage can contain multiple NSImageReps, and until now you’ve probably not been taking advantage of that feature; you’ve most likely been using a single bitmap representation for your artwork. Now that you have, let’s say, two bitmap representations, how does NSImage pick which one to draw? The important thing to bear in mind is that NSImage doesn’t actually make a distinction between high and low resolution. It really just cares about pixels; specifically, when you draw, it’s going to try to find the smallest bitmap representation that has at least as many pixels as the destination. A picture is worth 1000 words, and high resolution is probably worth 4000 words, so I’m going to show you what I mean. We’ve got our image here with a 1x PNG and a 2x PNG. It’s probably tough to see on screen, but you can definitely tell the 1x one has less detail. Physically they’re the same size, but one has way more pixels, and from NSImage’s perspective it looks like this: we’ve got a 2x representation that’s much larger.

If we’re drawing without any scaling into a 1x and a 2x destination respectively, it’s easy to figure out what’s going to happen: 1x gets the 1x representation, 2x gets the 2x. But what happens if we do something a little unexpected? Let’s say we draw into something that’s 100 pixels tall by 150 pixels wide. What are we going to get? It’s not the 1x representation; we’re actually going to get the 2x representation, and you can see it’s stretched there. This produces a better quality result, because you’re working with more pixels from the 2x representation, but there are a few places where this might happen where you might be surprised by the result instead of pleased by the higher quality, specifically if you’re stretching images, especially three-part images or banners. In this case, we’ve got two end caps that aren’t going to be scaled and a middle piece that’s going to be stretched over the entire center. What happens is that at 2x, we notice there’s no scaling at all on the end caps and draw the 1x representations, while for the center piece, we notice we’re covering a lot more pixels than the 1x representation has and that the 2x representation involves stretching a little bit less. That’s probably not what you want, especially if you have a clever artist who’s taking advantage of the actual pixels and isn’t just giving you scaled-up or scaled-down artwork.

If you’re in this case, we recommend that instead of drawing these yourself, you use NSDrawThreePartImage and NSDrawNinePartImage, which tile the image instead of stretching it. So what do I mean by tiling? Well, if you take a look at these images here with this grass texture, this is stretching: you can see the middle piece is, well, I hate to say the word again, but stretched. If we’re tiling, we don’t pull it out like that; we instead draw the image multiple times adjacent to itself. If your image is only, say, 1 pixel wide or 1 pixel tall and you’re stretching it like that, you’re not going to notice the difference, but tiling gives us the information we need to know that we don’t actually have to scale this thing and choose a 2x representation. If you really don’t want to tile, like you have a gradient in something you’re trying to stretch, consider the setMatchesOnlyOnBestFittingAxis: API. It tells the NSImage that if one of the image reps fits perfectly on one axis, it’s okay to use it even if another rep fits a little bit better on the other axis, but not perfectly.
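Here’s a minimal sketch of the three-part call Chris describes, as you might write it inside a view’s drawRect:; the image names are placeholders, not from the session:

```objc
// Sketch: drawing a stretchable banner with a tiled (not stretched) fill.
NSImage *leftCap  = [NSImage imageNamed:@"bannerLeftCap"];   // placeholder
NSImage *fill     = [NSImage imageNamed:@"bannerFill"];      // placeholder
NSImage *rightCap = [NSImage imageNamed:@"bannerRightCap"];  // placeholder

NSDrawThreePartImage([self bounds], leftCap, fill, rightCap,
                     NO,                     // NO = horizontal banner
                     NSCompositeSourceOver,
                     1.0,                    // fully opaque
                     [self isFlipped]);
```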

The other thing I want to mention is drawing off screen, where you’re probably using NSImage lockFocus and unlockFocus. There are a few things you should really be aware of. The first is: try not to use NSImage lockFocus. The reason is that your drawing is going to be flattened into a single bitmap, so all of the color space information and resolution information is going to be crystallized into that bitmap; you’re going to throw away detail. We have a new API in 10.8 called NSImage imageWithSize:flipped:drawingHandler:, and the drawingHandler is a block, so if you were using lockFocus and unlockFocus before, all the code between those two calls you’d now sandwich into your block. I want to especially call out how this behaves with regard to caching. Basically, the first time we draw the image, we invoke your drawing block against an off-screen bitmap that’s appropriate for the destination, so if we’re going to a 1x window, we invoke it against a 1x bitmap; and if that image is drawn there repeatedly, we draw from the bitmap and don’t invoke your block.
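As a rough sketch, the lockFocus-style drawing sandwiched into the new block API looks something like this:

```objc
// Sketch: the drawing that used to sit between lockFocus/unlockFocus
// now lives in the handler block. The block is invoked lazily against
// a bitmap matching the destination (1x or 2x), then cached.
NSImage *image =
    [NSImage imageWithSize:NSMakeSize(100.0, 100.0)
                   flipped:NO
            drawingHandler:^BOOL(NSRect dstRect) {
                [[NSColor redColor] set];
                NSRectFill(dstRect);
                return YES;
            }];
```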

If that window is then, say, moved to a 2x display, the next time the image draws we re-invoke your block against a 2x bitmap and then redraw from that bitmap as necessary. So you keep that caching behavior; you don’t have to worry about losing performance by switching to this if you had a lockFocus-based version before. The other thing I want to mention is that if you’re trying to manage your own bitmap caches of vector artwork or other things, you may be able to get away with just using this instead of managing it yourself. There are some cases where you won’t be able to use it, because you’re capturing transient state in your drawing, and having the block invoked repeatedly might draw different things; it’s not worth your effort to capture that state, and capturing the bitmap yourself is just fine. That works, but you need to be aware of a few things. The first is that you still want to create a multi-rep image, and you do that by explicitly creating NSBitmapImageReps; this is what lockFocus is doing behind the scenes.

If you look at the snippet below, it looks like a lot of code, but it’s actually just two method invocations. The important things I want you to look at are the pixelsWide and pixelsHigh arguments, where you can see we’re multiplying our width and height by whatever scale factor we’re targeting. I said we want to add multiple bitmap image reps, so we’re going to call this probably once for 1x and once for 2x. And notice the very last line: we’re calling setSize: on the rep with the size in points. We have the virtual size in points and the physical size in pixels, and together they tell us the resolution of the image; that’s important when we get to drawing in a second.
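A sketch of that snippet; pointSize and image are assumed context from the surrounding code:

```objc
// Sketch: call this once with scale = 1.0 and once with scale = 2.0
// to build a multi-rep image. pointSize is the size in points.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:pointSize.width * scale
                  pixelsHigh:pointSize.height * scale
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                bitmapFormat:0
                 bytesPerRow:0
                bitsPerPixel:0];
// Setting the size in points records the rep's resolution.
[rep setSize:pointSize];
[image addRepresentation:rep];
```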

Next, you’re going to use NSGraphicsContext to render into this bitmap, and this is a sequence of calls, sketched below. You tell NSGraphicsContext to save the current graphics state, so that any existing drawing that’s going on has something sensible to return to; then you replace the current context using NSGraphicsContext graphicsContextWithBitmapImageRep:, which takes the bitmap image rep we just created. I mentioned calling setSize: explicitly to communicate the resolution; that’s very important here, because it lets this method set up the transformation matrix on the context automatically. Then we’re just drawing a red rectangle here, and this drawing code doesn’t have to care whether we’re drawing at 1x or 2x; the scaling handled by the context does that for us. Finally, when we’re done drawing, we restore the current graphics state, which allows any enclosing drawing, within a view’s drawRect:, say, to continue normally; if we don’t call that, we’re going to have problems. The takeaway is that NSGraphicsContext will automatically set up the scale for you if you’ve built the NSBitmapImageRep correctly, and you should really take advantage of that. The other thing I want to call out again is that you do need to invoke this code multiple times to get the best results: once for your 1x representation and once for your 2x representation.
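Sketched out, the sequence of calls looks like this:

```objc
// Sketch: render into the rep created above. The drawing is in points;
// the context's transformation matrix handles 1x vs. 2x for us.
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];

[[NSColor redColor] set];
NSRectFill(NSMakeRect(0.0, 0.0, 10.0, 10.0));

[NSGraphicsContext restoreGraphicsState];
```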

Some other things I want to call out: there are a few methods on NSImage we’d really like you to move off of now, compositeToPoint: and dissolveToPoint:. The problem is that they don’t fully respect the context transformation matrix, and you’ll notice a lot of our drawing these days, 1x and 2x, really ties in with that context transformation matrix. Most of the time we get it right, but there are edge cases, and if you move off of these, you won’t hit them. A good rule of thumb is that if the method name doesn’t begin with “draw”, don’t use it to draw the NSImage. Instead, here’s the master drawing method for NSImage: drawInRect:fromRect:operation:fraction:respectFlipped:hints:. It sounds like a mouthful, but the important part is really just the destination rect parameter; everything else you can just copy and paste from the snippet below, and it will handle 90% of your drawing cases, probably more. That covers what I want to tell you about NSImage.
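For reference, a sketch of that call; destinationRect is the only part you typically vary:

```objc
// Sketch: the one drawing call to rule them all.
// NSZeroRect for fromRect means "the entire image".
[image drawInRect:destinationRect
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0
   respectFlipped:YES
            hints:nil];
```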

I know some of you are doing OpenGL stuff, and you’re going to be interested in this next section about OpenGL and high resolution. If you just run your app unmodified on these new systems today, you’re going to notice things look scaled up, and that’s because we create the surfaces at the standard 1x resolution. This is the most compatible thing to do, because points and pixels still match, but it’s not the result you’re probably looking for. If you want nice, crisp results on these displays, you need to do a little bit of work to opt in, and you do that using the NSView method setWantsBestResolutionOpenGLSurface:. You pass YES, and that tells us we can allocate a full 2x surface for you. But you have to do a little more than that: if you don’t update your glViewport code, you’re going to find you’re drawing in the wrong place. glViewport takes arguments in pixels, and until now you’ve probably been getting your arguments for glViewport by just asking for your view’s bounds. At 2x, of course, points and pixels don’t match up anymore, so there’s a method called convertRectToBacking: that we’d like you to use. It takes your local points and converts them into display pixels, and you pass the result on to glViewport.
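A minimal sketch of the opt-in and the viewport fix, assuming an NSOpenGLView subclass:

```objc
// Sketch: opt in once (e.g., in your view's initializer)...
[self setWantsBestResolutionOpenGLSurface:YES];

// ...and feed glViewport pixels, not points, when you draw or reshape.
NSRect backingBounds = [self convertRectToBacking:[self bounds]];
glViewport(0, 0,
           (GLsizei)backingBounds.size.width,
           (GLsizei)backingBounds.size.height);
```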

Some other things I want to mention: if you’re drawing UI stuff, you’re probably going to want to incorporate the UI scale factor into your model-view transform, so that things like buttons and text appear the same physical size in the real world as they did before; otherwise it’s tricky to click very tiny buttons. The other thing is that we have way more pixels, so you’ll probably want to update your texture resources to take full advantage of them. Let’s go ahead and show you exactly what I was talking about with the Chess application in OS X. You’re probably all familiar with Chess, so let’s just run it here. We’re on a 2x screen, and this is an unmodified version of the Chess program. If you look at our board and zoom in, you’ll notice the pieces are a little blocky, especially if you compare the text up here with the pieces; they’re not at the resolution you’d want to be rendering at. I mentioned we have to opt in, and while I’m not intimately familiar with Chess, I do know it uses an OpenGL view, so I’m just going to find where the OpenGL view is. It has this MBCBoardView class, so let’s check its implementation and find the init method. Here’s initWithFrame: in their OpenGL view subclass, and here’s where they call super’s initWithFrame:, so if we just add a setWantsBestResolutionOpenGLSurface: call on self here, we can see the results.

That looks crisper, I mean, these pixels are a lot sharper, but there’s a problem. I’m not quite sure what it is. [laughter] Oh, the board is really tiny, that’s the problem. I mentioned we have to update our glViewport call; it’s expecting arguments in pixels, so let’s see where we’re calling glViewport. Here it is in MBCBoardView’s draw method, and we have this variable called bounds. If we scroll up, we can see that bounds is definitely equal to [self bounds], so we just call convertRectToBacking: on self and pass in the existing bounds. Now we have a bounds rect in pixels, so let’s see what that looks like. That is the right size, and if we zoom in, we notice these pieces are really crisp. They match our Game Center warning resolution too, so that’s fantastic. [laughter] That was a simple case; there are more complicated ones, but for a lot of cases this will be exactly what you need to do.

Some other things you should be aware of: glViewport isn’t the only thing that takes device-dependent geometry; it’s not the only thing that picks pixels. Scissors, stencils, and many other OpenGL functions expect input in pixels, so you’ll need to modernize that code to use convertRectToBacking: or convertPointToBacking: as well. In general, we find it may be easier, when dealing with OpenGL, to just convert all of your inputs into pixel space up front and work from there in whatever your OpenGL world looks like. You don’t have to do this, but it tends to simplify things. The other bit: if you have pre-rendered text or GUI elements, you’re going to want to re-render those, and probably have separate 1x and 2x versions you can display; text especially looks not so great when we scale it up or down. Something else you should be aware of: if you’re using multisampling or other full-screen antialiasing, it’s really expensive, especially with regard to memory, and you might find that memory is better spent on high quality textures, so you might try turning it off or just dialing down the multisampling factor you’re using.

Some other notes if you’re doing full-screen OpenGL. Some of you are capturing the display and changing the resolution, and we’d really prefer that you stop doing that. Instead, create a window that covers the entire screen. Some of you may have been concerned about performance problems using a window instead of a captured full-screen OpenGL context, but we actually detect this case and make sure your bits get to the screen as fast as possible, so there’s no performance penalty for a full-screen OpenGL window as opposed to capturing the display. The other thing this lets us do is present critical system alerts. Now, I’ve mentioned this convertRectToBacking: method; what do I mean when I say backing? Let’s talk about backing coordinate systems. Backing coordinate systems are really what we’ve been talking about whenever we refer to bitmaps, and you’re most likely going to deal with them when you’re calling convertRectToBacking: and convertRectFromBacking:. These methods exist on NSView, NSWindow, and NSScreen, and NSView also has methods to convert points and sizes to backing.

Let’s discuss the specifics of that coordinate system a little bit. The first thing is that the units in the backing coordinate system are pixels; I don’t think anyone’s surprised by that. It has the standard Cocoa coordinate system orientation, where coordinates decrease toward the lower left and increase toward the upper right. That means you can floor a value to move it down or ceil a value to move it up, and integral values in this space are pixel aligned. You’ll notice I didn’t say anything about absolute coordinates, and that’s because we don’t make any guarantees about what the local-to-backing coordinate transformation looks like. Specifically, don’t assume that just because your bounds origin is 0, your bounds origin in backing coordinates is (0, 0). The view can be rendered into a surface or a layer, or have various flips involved on its way up to the window backing store, so you’re going to see some weird coordinates in there; you might put in a positive view coordinate and get a very negative backing coordinate. Don’t be surprised by that.

If you’re concerned about distances relative to a certain point in your view, convert that point to backing as well, do your calculations in backing space, and treat them as relative coordinates. Finally, the backing coordinate system is different for every view, window, and screen, so if you use convertRectToBacking: and are trying to round-trip data, make sure you use the exact same object to call convertRectFromBacking:. If you mix and match objects doing that, you’re going to get really weird results. A few of you are here for Core Animation, I’m sure, so let’s talk a bit about Core Animation now, especially if you’re managing custom CALayer content. Here are some things you should know. One: layer bounds and position aren’t pixels; they’re virtual coordinates, and to get results appropriate for the display you need to be aware of the contents and contentsScale properties. If you get the contentsScale wrong, you’re most likely going to see unsatisfactory results. The other thing you should be aware of is that the layer’s contentsGravity property affects the positioning of the bitmap within that layer; we’ll talk a little about that in a moment.

If you’re using the drawLayer:inContext: delegate method or subclassing drawInContext:, these already include the scaling, provided you’ve adjusted the contentsScale of the layer correctly. As long as you’re doing that, you don’t have to modify any of your drawing code, so that’s a handy thing to know. Another handy thing to know: you can use an NSImage as the contents of a layer. This is really convenient; we just add the one-line snippet below, and it works for multi-resolution images.
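The one-line snippet, roughly (the image name is a placeholder):

```objc
// On 10.8 a CALayer's contents can be an NSImage directly; AppKit
// picks the representation that matches the screen's resolution.
layer.contents = [NSImage imageNamed:@"cake"];
```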

Let me describe how we pick a representation in this case. Basically, we look at the resolution of the screen the layer tree is on. If you’re on a 1x screen, we pick the representation in the image most suitable for 1x; if you’re on a 2x screen, we pick the representation most suitable for 2x, and that probably covers 80% of the cases you’ll care about. It’s really handy, but there are some edge cases you should be aware of if you’re doing fancier things, so let me illustrate.

We have our cake image again, with 1x and 2x cake variants, and here’s a layer tree where we’re just displaying a centered layer. On 1x we display the 1x image, and on 2x we display the 2x image; so far so good. But let’s say we change the transform on the layer, or the bounds of the layer, so it’s larger now. What we’ll see on the 1x screen is that even though we could use the 2x representation and get more pixels to display, it doesn’t happen, and that’s because we don’t actually know the display size of the layer; we just know the resolution of the screen it’s on.

I mentioned contents gravity. Contents gravity is used in conjunction with the contents scale to position the contents within the layer. By default, the contents gravity is one of the resize modes, which aren’t affected by the contents scale when positioning the contents. If you’re using a non-resize mode like top-left, you’ll notice that when we provide the 2x bitmap, Core Animation treats it like a 1x bitmap and displays it over a much larger area.

Instead of just drawing the wrong number of pixels, we’re actually drawing things incorrectly, and that’s true for the various other non-resize gravities like top-right. To summarize those issues: we can’t pick up transform and bounds changes on the layers, and additionally, if you’re using NSImage as a layer’s contents, you need to be sure your contents gravity is one of resize, resize aspect, or resize aspect fill. Now suppose you do want to take advantage of bounds changes or transforms, or you want a non-resize contents gravity. We have API for you.

That API comes in the form of two methods on NSImage: recommendedLayerContentsScale: and layerContentsForContentsScale:. With recommendedLayerContentsScale: you pass in a desired contents scale. Let’s say you’re in a 2x window and you have a 3x transform attached to your layer; you would call recommendedLayerContentsScale: with 6, and NSImage will return the contents scale most appropriate given the image reps in that image.

If you have the 1x-plus-2x bitmap configuration in that image, we’ll say, well, two is way closer to six than one, so we return two as the contents scale you want to use. If it were a PDF image rep or something resolution-independent, we’d probably just return the factor you passed in. That’s used in conjunction with the next method, layerContentsForContentsScale:. You pass in the contents scale you’re going to set on the layer, and we give you back an opaque object to set as the contents of the layer. If you use those two in sync, you can use all the contents gravity modes defined in Core Animation.
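Used together, the pair looks something like this sketch; the 6.0 is the hypothetical 2x-window-times-3x-transform case above, and the image name is a placeholder:

```objc
// Sketch: ask the image which scale it can actually satisfy, then
// derive both contentsScale and contents from that answer.
NSImage *image = [NSImage imageNamed:@"cake"];
CGFloat desiredScale = 6.0;   // 2x window * 3x layer transform
CGFloat actualScale  = [image recommendedLayerContentsScale:desiredScale];

layer.contentsScale = actualScale;
layer.contents      = [image layerContentsForContentsScale:actualScale];
```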

I mentioned the contents scale, and you’re probably wondering how you know what it should be and how you react to changes in it. At the layer level we have a delegate method called layer:shouldInheritContentsScale:fromWindow:, and it returns a BOOL. If you return YES from this method, we update the contents scale of the layer to match the scale passed to the method and then call setNeedsDisplay on the layer. If you’re using the drawLayer:inContext: delegate method, this is a very natural pairing to automatically update the contents scale and redraw.

If you return NO, we keep our hands off the layer, and you can instead take this opportunity to update the contents and contents scale yourself. Something you should be aware of: if you do return YES from this, you absolutely need to implement displayLayer: or drawLayer:inContext:. The reason is that marking the layer as dirty, which we do when you return YES, blows away the existing contents of the layer, so if you don’t implement displayLayer: or drawLayer:inContext:, you’ll notice the contents of your layer disappear instead of being updated for the resolution, which is probably not the effect you’re going for.
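The YES case pairs naturally with drawLayer:inContext:; a minimal sketch:

```objc
// Sketch: let AppKit update the layer's contentsScale and mark it
// dirty; our drawLayer:inContext: then redraws at the new scale.
- (BOOL)layer:(CALayer *)layer
        shouldInheritContentsScale:(CGFloat)newScale
                        fromWindow:(NSWindow *)window
{
    return YES;
}
```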

The other bit to be aware of is that this delegate method is not invoked by Core Animation; it’s actually invoked by NSView. So if you add a layer to an existing hosted layer tree, we don’t know about it, and we can’t invoke that delegate method for you. The takeaway is that when you’re creating such layers, you should take care to set the contents scale manually, and you can do that by asking the window for its backing scale factor, for example.
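For example, something like this sketch when you create and attach the layer; rootLayer is a stand-in for whatever layer you’re hosting:

```objc
// Sketch: a layer added to an already-hosted tree won't get the
// delegate callback, so seed its contentsScale from the window.
CALayer *badge = [CALayer layer];
badge.contentsScale = [[self window] backingScaleFactor];
[rootLayer addSublayer:badge];
```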

Let’s go ahead and see what taking this advice looks like. This is the unmodified version of the app. On the left is a layer that uses drawLayer:inContext: to draw; within that, it uses NSImage’s drawInRect: method as well as NSRectFill with a blue color to make the blue background. The layer on the right is set up by setting the background color of the CALayer, setting the contents gravity to center, and setting the contents to our NSImage.

It’s displaying more or less correctly, but those are some really fuzzy cakes. So the first thing we’re going to do is add our cake@2x.png to the target. We’ve updated our artwork, so everything should look great, and yet we can see the image on the left doesn’t look so great, it’s still pretty blurry, and the image on the right is a different size. We’re getting very different results than before, so let me show how we set up those layers. In applicationDidFinishLaunching: here, the layer on the left is pretty simple: we just set ourselves as the delegate and call setNeedsDisplay.

The one on the right does exactly what I said: it sets the background color and the contents gravity, and it calls this configureContentsForView2 method, which just sets the contents to an NSImage. I mentioned using layer:shouldInheritContentsScale:fromWindow:, so let’s go ahead and do that, one layer at a time. We’ll start with the layer on the left and just add the delegate method here. We run this, and you’ll note nothing has changed; the image still looks just like it did before. The problem is that when we set the layer on this view, the view is already hosted in the window, so we don’t get notified that a layer was added at all, and this method is never invoked.

The solution is to update the contents scale manually. I mentioned the window’s backingScaleFactor method, and that’s something we can use here. We just add view1.layer.contentsScale = view1.window.backingScaleFactor, and now we see a… well, maybe it’s difficult to see out there, but this is a much crisper, 2x variant of the cake image. Let’s go ahead and start working on the right-side layer, the one using the layer properties to display itself.

I mentioned we’re using contents gravity center, which means we can’t just set the image directly as the contents of the layer anymore. Let’s update this code to use an explicit contents scale. In this case we have our new configureContentsForView2 method that takes the scale. It grabs the cake image again, asks the image for the recommended contents scale, sets that on the layer, and then also passes it to the layerContentsForContentsScale: method and updates the layer contents with the result.

We’ve changed our method name here, so let’s just start with a 1x scale and see what happens. Notice that we’re now actually getting a pretty decent result; it’s still the 1x image, though, so it’s not as sharp as it should be. We want to update this to be similar to our code on the left, where we’re manually passing the scale factor: instead of view1.window.backingScaleFactor, we’re going to use view2.window.backingScaleFactor. Now you can see we’re getting consistent drawing between the left and right layers again, which is exactly what we wanted, and it’s using our 2x resources.

That’s great; that’s what we were targeting. I mentioned we’re handling this layer by layer: we added layer:shouldInheritContentsScale:fromWindow: but didn’t handle layer two in it. In this case it doesn’t make a difference, because we’re not responding to a dynamic screen change, but that’s something we will have to respond to in the real world, so you would add code like the sketch below to notice, oh, we’re talking about layer two; let’s call configureContentsForView2 again with the new scale factor passed into the delegate method. And it’s very important that we return NO here: if we were to return YES, we would blow away the contents we just set on the layer and undo all of our work.
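Putting the whole delegate method together for the demo’s two layers; view1, view2, and configureContentsForView2: are the demo’s names as best the transcript preserves them:

```objc
// Sketch: layer one inherits and redraws; layer two is managed by
// hand, so we reconfigure it ourselves and return NO.
- (BOOL)layer:(CALayer *)layer
        shouldInheritContentsScale:(CGFloat)newScale
                        fromWindow:(NSWindow *)window
{
    if (layer == view2.layer) {
        [self configureContentsForView2:newScale];
        return NO;   // returning YES would blow away what we just set
    }
    return YES;      // view1's layer redraws via drawLayer:inContext:
}
```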

All right, so let’s stay on that a bit. I mentioned the shouldInheritContentsScale method, and you’ve probably picked up that resolutions can change. Let’s talk about that a bit more. The resolution can really change at any time; you can’t necessarily predict that things will hold constant. Someone can hot-plug an external display, or mirror, or extend the desktop, and just because the internal display is 2x doesn’t mean the external displays are going to be 2x. You do have to deal with heterogeneous environments where you have a 1x display and a 2x display simultaneously.

In those cases, when a window is dragged between displays, it updates automatically. Let me say a bit more about how a window updates its resolution: it tries to make its backing resolution match the backing resolution of the associated NSScreen, and that means if you’re straddling displays, it picks the screen that the largest part of the window is on. If the window is off screen entirely, we do something different: we use the resolution of the highest resolution display attached to the computer.

When we do change the backing scale factor of the window, we post a notification about it: NSWindowDidChangeBackingPropertiesNotification, which is also posted when the color space or bit depth of the window changes. If you’re using a window delegate, you can implement windowDidChangeBackingProperties: instead, which receives the notification as an argument. Views are similar.

We have a new method on NSView called viewDidChangeBackingProperties. You can override that; there’s no equivalent notification. It’s called when the view is added to a window, when the window changes its backing resolution, or when the color space changes. Below is a snippet demonstrating what you might do in that method: we call super’s viewDidChangeBackingProperties, and, if you recall from the demo, we use the new NSImage contents-scale-based methods to explicitly manage the contents and contentsScale of the layer we’re hosting.
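The snippet is along these lines; the image name is a placeholder:

```objc
// Sketch: re-derive the hosted layer's contents when the backing
// properties of our window change.
- (void)viewDidChangeBackingProperties
{
    [super viewDidChangeBackingProperties];

    NSImage *image = [NSImage imageNamed:@"cake"];
    CGFloat scale  = [image recommendedLayerContentsScale:
                         [[self window] backingScaleFactor]];
    self.layer.contentsScale = scale;
    self.layer.contents      = [image layerContentsForContentsScale:scale];
}
```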

Something to be aware of: this method, for one, isn’t invoked when a view is removed from a window. Additionally, if you call convertRectToBacking: on a view that isn’t in a window, the view acts as if it’s on the highest resolution screen; that’s consistent with NSWindow’s behavior, and before a view is added to a window, it does the same thing.

I would like to go ahead and bring up our resident text expert, Aki Inoue, to talk about text rendering in high resolution.

[applause]

Aki: Thank you, Chris. Good morning, I’m Aki Inoue, the text guy from the Cocoa group. Today I’m going to cover how you can achieve the best text quality for your application in the high resolution world.

Let’s get started with the exciting new stuff in Mountain Lion. Actually, there is no new text system API to support high resolution. Why is that? The text system is designed to be resolution agnostic throughout, and it’s been working in non-identity coordinate systems for years; for example, we’ve been working with the zooming view in TextEdit, and we’ve been rendering into totally resolution-independent PDF files. There was no need to add new API, but we are introducing a significant behavior change in Mountain Lion.

Screen fonts: we’re deprecating the usage of screen fonts starting in Mountain Lion. Let’s review what screen fonts are. A screen font is a variant of your base font, and it uses integer glyph advances instead of the default floating point values. As you can see, with the base font, the advance width of the character here, at 14 point, is a fractional value taken from the font file itself. With the screen font, on the other hand, the advance is rounded to an integer, 3, so that the origin of the next character is aligned to an integer position.

Remember, at low resolution one point was typically one pixel, so you got the effect of pixel-aligned characters in some cases. With this we were able to achieve decent text quality on low resolution displays, code could cache glyph shapes easily, and it worked well with hand-tuned, ancient bitmap fonts. These days, these advantages are much less relevant because of newer Quartz technologies such as font smoothing and subpixel font rendering.

Also, the disadvantages of using screen fonts outweigh the advantages. For example, because of the rounding, the glyphs are spaced unevenly, and the gaps created by this uneven spacing are more apparent, uglier, when you’re running at high resolution. Also, we don’t use kerning or ligatures, the higher, more advanced typographic features, with screen fonts, because those features are designed with floating point values in mind, so they don’t work too well with integer advances.

Finally, because of the tweaked character widths, text often doesn’t scale consistently between point sizes. If your application uses multiple font sizes, such as a graphics tool or a presentation tool, you might sometimes encounter surprises caused by this effect. As I mentioned before, using the base floating point font gives you the best text quality in both low resolution and high resolution. Because of that, these floating point advances are often referred to as ideal advances, and they let you take advantage of the higher density in high resolution, because we’re not rounding to pixel alignment.

We’re now enabling these ideal advances everywhere by default. This is how the font designer originally imagined the text, and you get uniformity between point sizes and different coordinate systems. For those reasons, we’ve already been using the base fonts in many places. For example, our system font, Lucida Grande, has been using floating point advances since Mac OS X 10.0, and many applications, such as iWork, have been using floating point advances too. Finally, iOS itself doesn’t have the concept of a screen font altogether.

Let’s take a look at a screenshot. This is from 10.7, showing Times at sizes from 12 to 18 point. As you can see, the line edges are pretty jagged in some places. And this is how it looks on 10.8: the lines are smooth and consistent between point sizes. Let’s zoom in on some of the words. On 10.7, you might notice the glyphs are placed unevenly, and especially between the W and the E the gap is pretty ugly. On 10.8, the glyphs are spaced evenly, and the W and E are placed handsomely using kerning.

Let’s take a look at the actual text system APIs here. As you may know, Cocoa has three main groups of text rendering and measuring APIs. NSLayoutManager, one of the Cocoa text system APIs, along with NSTextView and NSTextStorage, provides the power and accessibility of the full text engine. The NSString drawing APIs, such as drawInRect:, give you a convenient way to render NSStrings efficiently. And NSCell renders many user interface control strings.

These APIs can be further categorized into two groups. One is document contents: NSLayoutManager and the rest of the text system machinery, with NSTextView, usually take on the burden of supporting large documents. User interfaces usually end up using the NSString APIs and NSCell. For these groups we have had specific API for controlling the screen font setting available since 10.0. For NSLayoutManager, we have the usesScreenFonts method: when it returns YES, NSLayoutManager uses screen fonts; if NO, it doesn’t. Similarly, we have the NSStringDrawingDisableScreenFontSubstitution flag, used with the extended string drawing APIs such as drawWithRect:options:. By specifying this option you can disable the substitution of screen fonts with these APIs.

On 10.7, usesScreenFonts defaulted to YES, and the string drawing default was set so that we were always using screen fonts. On 10.8, they’re flipped: the layout manager doesn’t use screen fonts, and NSStringDrawingDisableScreenFontSubstitution is implied. Typically, you use a font object: you send an NSFont factory message such as fontWithName:size:, you get a base font, you pass that font object to one of the text system APIs, and the text system takes over from there. You measure and render text through these text system APIs, and your application never sees the screen font itself.

Behind the scenes, the text system swaps the base font you specified for its corresponding screen font dynamically, as necessary. There are two APIs involved: NSLayoutManager’s substituteFontForFont: and NSFont’s screenFont. This is roughly what substituteFontForFont: does in NSLayoutManager’s implementation: it checks whether it’s supposed to use screen fonts, and if so, it calls NSFont’s screenFont to get the substitute. The text system sends substituteFontForFont: to self, gets a screen font, and uses it dynamically for measuring and rendering.
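In spirit, it behaves like this sketch; the real NSLayoutManager implementation is more involved:

```objc
// Sketch of the substitution logic Aki describes, not the actual code.
- (NSFont *)substituteFontForFont:(NSFont *)originalFont
{
    if ([self usesScreenFonts]) {
        NSFont *screenFont = [originalFont screenFont];
        if (screenFont) {
            return screenFont;
        }
    }
    return originalFont;
}
```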

As I mentioned, because of these changes, in Mountain Lion you take advantage of the higher density in your text rendering by default. But is there anything you need to do? Well, in some cases, yes: because character widths differ between screen fonts and ideal fonts, the number of characters that can fit on a line changes, and these changes are often something you want to avoid, because you want your document to have the same appearance between releases.

If your application falls into this category, you can manage the screen font setting per document using, for example, API such as NSLayoutManager’s usesScreenFonts. We’re also introducing a new document attribute, NSUsesScreenFontsDocumentAttribute, that you can specify so that your per-document screen font setting can be stored in your document data. TextEdit has been enhanced to take advantage of this new functionality, so you can look at its source example and adopt the same strategy in your application.
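For instance, when writing out RTF, a sketch of pinning the setting into the document data might look like this; textStorage is assumed context:

```objc
// Sketch: persist the per-document screen font setting alongside
// the document data.
NSDictionary *attributes = @{
    NSDocumentTypeDocumentAttribute    : NSRTFTextDocumentType,
    NSUsesScreenFontsDocumentAttribute : @YES   // keep Lion-era metrics
};
NSData *rtfData =
    [textStorage dataFromRange:NSMakeRange(0, [textStorage length])
            documentAttributes:attributes
                         error:NULL];
```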

We’re also introducing a new preferences key for controlling default screen font substitution, NSFontDefaultScreenFontSubstitutionEnabled. It controls the default setting for screen font usage: if your application is linked against the 10.8 SDK, it defaults to NO, meaning you’re not using screen fonts, and if you’re linking against a previous SDK, it’s YES, preserving the Lion behavior.

Using this key, you can control the default screen font usage in your application explicitly. Now I would like to switch over to Dan Schimpf, who is going to discuss the intricacies of pixel alignment.

[Applause]

Dan: Thank you, Aki. Okay, I’m going to talk about a couple of issues with aligning pixel-based art. This is most helpful when drawing UI controls like buttons and things. What are the issues you may run across? Well, there are some situations where drawing your UI art may have worked well at 1x but seems to fail at 2x; things end up out of alignment. This usually comes down to rounding differences: some layouts that, again, worked at 1x just change their values at 2x. This is because there are no odd pixel values at 2x; everything is doubled, so even three points all of a sudden becomes six pixels.

Here are a couple of examples of things that actually work. I have a four-point-tall space and I want to fit something two points big inside it, and you can see that at 1x it works out fine, we can center it just fine, and at 2x the blue bar is in the same spot; my one point turns into two pixels, and everything is just okay, yay. The same thing works for odd inside odd as well: we center it, one point turns into one pixel at 1x and into two pixels at 2x, and everything is happy.

The trouble begins when we have an even-sized item inside an odd-sized space. We try to center it at 1x; the offset is 1.5 points, so we round it up to two pixels to get a good appearance. But at 2x we have enough pixels to actually put it where it belongs, and while that’s technically more correct, it’s all of a sudden in a different spot, shifted down a bit. Maybe you’d already accounted for that rounding up in your design at 1x, and now it looks wrong. Just for completeness, here is an example of an odd-sized item inside an even-sized space; it shifts down again.

What do we do; how do we see these things? The easiest way to see them really is to just look. I mean, you can do a lot of math, but testing is going to be key here. If you have two displays, it’s really easy: set one at 1x and one at 2x, then drag your window back and forth between the two displays and watch how things change as you move. As we said before, the window switches when it crosses the midpoint: when the window has more of its area on the other screen, the window rebuilds at 2x.

That’s how you can observe any visual shifts; things should stay in the same spot at 2x and 1x, and your eye can really pick those differences out. If you only have one display, don’t worry, you can still do this: take screenshots in both modes, then open them up in Preview or any other image viewer where you can line them up in the same spot on screen, scale the 1x screenshot so they occupy the same amount of area on screen, and then just flip back and forth; you’ll see the visual shifts.

Here is an example of a pixel shift. This is a button with a glyph inside it, at 1x but obviously scaled up big so you can see it. I’m going to flip it to 2x. It’s subtle, but it moved; it moved up. I’ll flip back and forth a bit so you can see it. If you’re just sitting at 2x, you may not notice these sorts of things, but it means your layout is wrong, your UI isn’t correct, and especially if your users flip back and forth, they are going to notice these kinds of things. So where is the problem here?

Well, this time it might actually be the design. If you can redesign the 1x appearance to eliminate these even-inside-odd situations, that’s really the best thing to do, because then the math works out correctly at 2x as well. If you can’t change it, because of legacy concerns or historical reasons, or maybe you just like it better, you might have to introduce 2x-specific code to handle this. You can experiment with the rounding direction: there is a method, backingAlignedRect:options:. You pass in a rect, and it gives back a rect that’s aligned to the pixel grid in the backing coordinates.

It doesn’t change coordinate spaces, but the result is aligned on a proper pixel grid, and the options flag you pass in permits explicit control over which way each edge rounds. If that doesn’t work, you may have to add a half point, or one pixel if you’re in pixel space, explicitly when running at 2x, but as you might tell, this is a little fragile and not exactly the cleanest code in the world, so really do this only if absolutely necessary.
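A sketch of the call; glyphRect is a stand-in for whatever rect you’re placing:

```objc
// Sketch: align a rect to the backing pixel grid, choosing the
// rounding behavior explicitly via NSAlignmentOptions.
NSRect aligned = [self backingAlignedRect:glyphRect
                                  options:NSAlignAllEdgesNearest];
```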

Scale factors. We’ve given you a couple of cases in this session alone where scale factors are important, but we want to caution you against overusing them. Some background on coordinate spaces first: we talked a little about the backing coordinate spaces, but there are others. Cocoa actually has a lot of them: NSWindow, NSView, CALayer, bitmap contexts you may have created, and OpenGL contexts. They each have their own coordinate space, and to talk about positions you have to convert between one and another. It may seem like a lot of work, but by dealing in the correct coordinate space, your code actually stays clearer.

The best part is that the scale factor is already accounted for in all of these conversions. When you want to convert things from one space to another, say you’ve got a view and you want to go to a different view, use convertRect:toView:; there are also convertPoint: and convertSize: variants. If you want to go from a view to window coordinates, you use convertRect:toView: again, but you pass in a nil view, and that gives you window base coordinates. This also matters when you have something like NSEvent’s locationInWindow: the point in an NSEvent is in window base coordinates, so you convert from the nil view to get that point back into your view’s coordinate space.
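Sketched out:

```objc
// Sketch: the common conversions, with the scale factor handled for us.
NSRect inOtherView = [self convertRect:someRect toView:otherView];
NSRect inWindow    = [self convertRect:someRect toView:nil]; // base coords

// NSEvent locations arrive in window base coordinates:
NSPoint localPoint = [self convertPoint:[event locationInWindow]
                               fromView:nil];
```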

If you have an NSWindow and want to take something like a point or rect to the NSScreen it’s on, you can use convertRectToScreen:, and from an NSView to its hosted CALayer you can use convertRectToLayer:. Again, we mentioned this before, but if you have anything and you want to take it to its backing coordinate space, use convertRectToBacking:. So where don’t you want to use scale factors? Let’s say I’m drawing something, I’m in my drawRect:, and I really want to get something onto a pixel boundary, into pixel coordinates.

Well, I could just take my frame or my bounds, get my window’s scale factor, and do some math myself, but this isn’t actually going to work in a couple of cases: if you’re in a layer, or in some other kind of context, or if some other scale has been applied; it’s also dependent on the window you’re drawing into. The better thing to do is, again, to use these convertRectToBacking: calls, which will give you your pixel origin. This is why reaching for your window’s scale factor is usually not the best approach.

Some tips for scale factors. Work in points wherever possible; it makes your code cleaner and simpler. Be prepared for fractional points and positions at 2x, because they’re okay: if you’re at 3.5 points, that’s actually seven pixels; you’re still aligned, you’re okay. Convert any coordinates and sizes you use to the appropriate space before using them, even NSView to NSView, even if they’re in the same parent view, because they may have view-specific transforms.

If you can get by, don’t ask for the current scale factor, but if you absolutely need it, as in some of the cases we discussed in this session, use your current window or the tightest context you can. If you don’t have a window or a screen you can use, it might be time to rethink your design, because, again, in cases where you have both a 1x and a 2x display, what is the right answer? It’s hard to say. Okay, and I’m going to hand it back to Patrick here. He’s going to talk about on-screen content.

Patrick: Thank you, Dan. All right, let’s talk about capturing on-screen content, and what I’m really talking about here is creating images of your application’s user interface. Why are we talking about this in the context of high resolution? Well, it’s a place where you’re managing pixels yourself. A typical case where this might come up: you want to temporarily cache some of your application’s drawing, especially if it’s expensive, for use as the contents of a transitional animation.

You want that animation to be as fluid and smooth as possible, and you can’t afford to draw every frame individually. Drag images are another case: typically you might want to take your existing content and capture it. In these cases, as I mentioned a moment ago, you’re creating bitmaps directly; you’re creating pixels, and so now you have to care about scale factor, high resolution, and scaling in a more explicit way. When you’re capturing on-screen content, the important thing to remember is that depending on your goal, you will need to use different techniques and different APIs to achieve it; for example, capturing windows and views versus capturing display content.

I’m going to cover both of those cases and show you two different techniques. Okay, so first let’s talk about capturing view hierarchies. This is the more standard case: you have a particular view hierarchy within your window, and you want to draw it into an off-screen bitmap or capture it for use in something like an animation. Three easy steps. First, you need to create a bitmap drawing destination to act as the backing store of that capture. We have a method on NSView to help you with this, and these APIs are not new, by the way.

In fact, I just want to mention for a moment that every API we’ve mentioned in this talk, besides the ones we’ve explicitly called out, is available in 10.7.4 and later; there are only a very few things that are new for 10.8. bitmapImageRepForCachingDisplayInRect: will give you a suitable bitmap image rep destination to act as a backing store. Then you take your view and draw it into that backing store using another NSView method, cacheDisplayInRect:toBitmapImageRep:, where you provide that bitmap image rep I mentioned.

Thirdly, and not to be forgotten, create an NSImage, and more importantly, create an appropriate, high-resolution-compatible NSImage from that bitmap image rep. The key detail here is that it’s important to initialize the NSImage with the size in points of that capture; it should be equivalent to the size in points of the rect you originally captured.
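The three steps together, as a sketch:

```objc
// Sketch: capture a view hierarchy into a resolution-matched NSImage.
NSRect captureRect = [view bounds];

// 1. A bitmap destination matching the view's window backing scale.
NSBitmapImageRep *rep =
    [view bitmapImageRepForCachingDisplayInRect:captureRect];

// 2. Draw the view (and its subviews) into it.
[view cacheDisplayInRect:captureRect toBitmapImageRep:rep];

// 3. Wrap it in an NSImage sized in *points*, not pixels.
NSImage *capture = [[NSImage alloc] initWithSize:captureRect.size];
[capture addRepresentation:rep];
```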

Some things you should be aware of in this context: the resolution of the captured image will match the window it was originally taken from, so if you ask a particular view for a bitmap rep drawing destination, it will be created with the same backing scale as the window that view is contained in. If you do this for a view that’s not currently attached to a window, or one that’s in a window that’s currently off screen, that is to say positioned off of any of the available displays, off in space somewhere, the resolution of that capture will match the highest resolution available screen.

This might look familiar to you; my colleague Chris Dreessen was talking about this, and it’s the standard fallback we use when we don’t know which actual screen you’re on. Another thing to keep in mind is that this NSImage, contrary to all the other NSImages we’ve been talking about and encouraging you to use, is only going to have a single resolution, because you only created one capture, right? If you actually need to hold on to this for a while and use it in multiple contexts, you might need to be aware of that and add the low and high resolution versions of that capture as necessary.

Now let’s talk briefly about capturing screen content. What I mean by screen content is stuff that’s already in the frame buffer, on display; you just want to screen-grab a portion of that, and it may not be your own application’s content, you just want to grab content from the desktop. Because this is about the entire desktop, you need to use the Quartz Display Services APIs; you need to go beyond the bounds of what AppKit can provide for you.

There is an API for doing this; there’s actually a whole family of APIs, but I’m going to focus on a particular one here, CGDisplayCreateImageForRect. You pass the display ID in, and you pass the rect in points; remember, the global unified coordinate system is in points, always, at both low and high resolution. Now the difference here, and this is the important point, is that you need to calculate the image size, because what you get back is a CGImage whose width and height are not in points; rather, they’re in image space, that is, pixels. In order to create an appropriate NSImage from that CGImage, you need to convert from backing to compute the size in points, and you need to do that with the screen object you captured from in order to get the right result, so that whether it’s a 1x screen or a 2x screen is taken into account appropriately.

Here’s a little tip. You saw the CGDirectDisplayID; well, how do you get there from AppKit? It turns out that if you ask the NSScreen for its deviceDescription, and ask that dictionary for the object for the key “NSScreenNumber”, that is magically, very usefully, the CGDirectDisplayID. You can use that to connect the dots between the two sets of APIs.
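Connecting those dots, a sketch of the whole capture; rectInPoints is a stand-in for the area you want to grab:

```objc
// Sketch: grab a rect of the desktop and wrap it at the right point size.
NSScreen *screen = [NSScreen mainScreen];
CGDirectDisplayID displayID =
    [[[screen deviceDescription] objectForKey:@"NSScreenNumber"]
        unsignedIntValue];

// rectInPoints is in global (point) coordinates.
CGImageRef cgImage = CGDisplayCreateImageForRect(displayID, rectInPoints);

// The CGImage is sized in pixels; compute points via the *same* screen.
NSRect pixelRect = NSMakeRect(0, 0, CGImageGetWidth(cgImage),
                                    CGImageGetHeight(cgImage));
NSSize pointSize = [screen convertRectFromBacking:pixelRect].size;

NSImage *capture = [[NSImage alloc] initWithCGImage:cgImage
                                               size:pointSize];
CGImageRelease(cgImage);
```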

Okay, now let’s talk a little bit about performance under high resolution. I’m not going to go into nitty-gritty core techniques; I’m really just going to talk about the philosophy of performance under high resolution and give you some general parameters. First and foremost, the thing to remember with these particular products especially is that your application is going to be processing as many as seven times the number of pixels under high resolution. The 7x, of course, comes from those wonderful new scaled high resolution desktop modes with downscaling that you can get to.

The largest goes up to 1920 by 1200 at 2x. That’s a lot of pixels, and you might come away thinking, how can anything possibly be fast with that much more, almost an order of magnitude more, pixel content? What’s going to happen? Well, I’d like to give you this message: don’t despair. The hardware, especially on these new products, is more than capable of handling it, as is evident if you’ve had a chance to play with it; even the most aggressive system animations we do, and scrolling, and a lot of dynamic behaviors, it handles without breaking a sweat or even getting out of low power states.

In particular, the key thing we’ve discovered while developing this product is that most often, performance problems are not because you’re hitting some fundamental hardware limitation. Obviously there are cases, if you’re a game or really aggressively focused on GPU programming, where the hardware is going to be your bottleneck, and there are special considerations for that, but if you’re a regular Cocoa application, it’s typically not the case that you’re fundamentally limited by the hardware.

What’s most important is to make sure your application actually leverages the system graphics technologies as much as possible. Make sure that when you’re profiling your application, you’re spending most of your time asking the system to do work for you, rather than, say, waiting for one of your threads to respond to another and just waiting around a lot. We’ve looked at a lot of applications, and it’s usually some sort of choreography problem rather than a fundamental hardware limitation.

Another thing I want to point out on the topic of performance: be aware of time-space trade-offs. Because of the significantly larger number of pixels your application is processing, some caching strategies that were advantageous under standard resolution may no longer be advantageous under high resolution, now that you have to cache both 1x and 2x content to handle any possible display. Especially with the fancier hardware in these machines, it may actually be more advantageous to just render every time and stop caching. That’s a pretty big change, so you may want to revisit the assumptions behind some of your caching strategies and make sure they still hold.
