Introduction to High Resolution on OS X 

Session 213 WWDC 2012

Give your users the best experience by taking advantage of High Resolution on OS X. Learn how to move your applications to High Resolution, create crisp application assets, and find out how to avoid common pitfalls.

Dan: Good morning. Welcome to Session 213, Introduction to High Resolution on OS X. My name is Dan Schimpf. I’m an engineer on the Cocoa [inaudible 00:00:19] team. I’d just like to start with a little bit of an introduction. As you all know, 2X Retina displays arrived first with the iPhone 4. On the Mac side we’ve only really had one scale factor to deal with for a long time. There have been many different screen sizes, of course, but only one scale factor for the pixels on screen. That’s of course until now. The good part is OS X takes care of a lot of the details for you in your apps, but full adoption for your applications will require some work. It’s going to be worth it, trust us. Here’s what we’re going to be talking about today. First, a review of what High Resolution is for the Mac. Then we’re going to talk about some things you need to do to optimize your application for High Resolution: what to do with your bitmap images, your icons in the Finder and your document icons, some final touches to make your application really the best on the new hardware, and some problems you may run into while you’re optimizing your application. How does High Resolution work? There are new High Resolution display modes for the Retina displays on the new hardware. The screens and the windows now have a 4 to 1 pixel per point density, and the frameworks provide the scaling between the 1X and the 2X operation for your windows and screens. The Quartz Window Manager, which [inaudible 00:01:56] of the windows and the display on the screen, ensures the consistent presentation of these windows across multiple displays of different scale factors. What is High Resolution? What are we talking about here? Well, with High Resolution the number of pixels on the screen actually quadruples. You see 2X, but you actually get four times as many pixels. That’s four pixels on screen for each point. Well, what is a point? A point is a unit of measurement.
It’s commonly referred to as 1/72 of an inch, although on actual computer monitors this can vary. What this leads to is more pixels to display your text: sharper text and more detailed graphics, as we said. Let’s do a quick demonstration. This is a point on screen. There are many points like it, but this one is mine. At 1X that point corresponds to one pixel. That pixel may be big or small, but one point equals one pixel. At 2X we have the same point, but it corresponds to four pixels in the same area. How does this compare to iOS, which many of you may be familiar with? Well, it’s actually very similar. The basic tenets are the same between the two platforms. They both have an integral UI scale factor, 1X and 2X, and they both provide a lot of automatic scaling from the application frameworks, but there is some enhanced functionality for the unique needs of OS X, particularly in the realm of multiple displays. If you have multiple displays attached, one can be at 1X and the other one can be at 2X, depending on the capabilities of that hardware. There’s unique functionality in OS X to handle this. This extends to windows versus screens: because different screens can have different scale factors, different windows can have different scale factors as well. The Mac also has to be more dynamic in terms of resolution changes, with displays coming and going and the user changing settings as well. More pixels on screen means we can do more with them. Here’s a yellow blob on screen at 1X. You can see there’s lots of fuzziness on the edges, but if we take it to 2X we can show finer details, and the same mathematical shape corresponds to a sharper rendering of that shape. Here we are at 1X again. We have 16 pixels for 16 points of width, but if we go to 2X, all of a sudden we have 32 pixels for the same 16 points. How does this work? On OS X we’ve got a unified coordinate system. That means all coordinates are in points.
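The point-to-pixel relationship Dan describes can be sketched in a few lines of Python. This is a conceptual illustration only; in a real app the frameworks do this conversion for you.

```python
def points_to_pixels(points, scale_factor):
    """Convert a length in points to device pixels for a given scale factor."""
    return points * scale_factor

def pixels_per_point(scale_factor):
    """How many device pixels cover a single point-square."""
    return scale_factor ** 2

# At 1X one point is one pixel; at 2X the same 16-point width spans 32 pixels,
# and each individual point is covered by four pixels.
```

This is exactly the 16-points-wide example from the talk: the logical width stays 16 points on both displays, only the pixel count changes.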
This means view frames, window locations, screen sizes, all of these things are in points, and because different displays can be at different scale factors, the scale factor can differ by screen and by window. But the good news is that for the most part your apps don’t need to care about this. If you just pay attention to what we’re telling you, your applications will probably work for the most part without any changes. Here are two screens hooked up to your system. They both have the same number of points, but it turns out the 2X one actually has two times as many pixels in each dimension. They come across as the same logical size. How does this work with your current apps right now? Well, by default Cocoa apps are scaled automatically. That means you get sharp text and crisp Aqua graphics for free. Well, except for your custom bitmaps; they’re not going to be optimized. We can’t invent pixels that aren’t there. They’re just going to be magnified. By default Carbon apps are magnified, and this means that the text and UI widgets are magnified as well and will look unoptimized. Here’s what you need to do with your app to make it ready for High Resolution. First, you’re going to go through all of your 1X bitmaps and you’re going to need to add 2X representations of all of them. Then you’re going to need to add High Resolution icons for use in the Finder. These are your application icons and your document icons. Then you need to look through your code for any uses of deprecated API, things that may have been hanging around maybe a bit too long; now’s a good time to switch to a more modern equivalent, because it turns out these modern methods are more capable of handling things. They have more information, so we can make better decisions with them. While you’re doing that, also look through and see if you’re making any bad assumptions about pixels.
We’ve long said that one point does not equal one pixel, and now is the time when we’re making it real. Perhaps the best thing to do is just give it a try. If you can get your hands on some hardware you can try it out there, but you can also try it on your home system. You don’t need to wait for your hardware to show up, and you’d be surprised how well it works. Here’s how to test it at home before your hardware shows up. If you haven’t already, install the Graphics Tools for Xcode, because you’re going to want to launch Quartz Debug and open the UI Resolution window from the menu bar. You’re going to select Enable HiDPI display modes. What that’s going to do is, after you log out and log back in, you can use the Displays pane in System Preferences to select a HiDPI display mode. This is an example from a laptop, which would be small if you don’t have the nice new hardware, but you can still use it to test. Okay. I’m going to hand it over to my colleague Patrick, who’s going to talk more about what to do with your artwork. Patrick: Thank you Dan. My name is Patrick Heynen and I’m here to tell you about artwork. Why are we talking about artwork? Well, at the end of the day, with the Retina display it’s really all about pixels, isn’t it? Well executed designs really can have an extraordinary impact on your application and how it looks. Let’s take a look at some case studies here of standard resolution versus High Resolution, just to give you an idea of what the difference really is and what Retina really gives you for your application. This is Final Cut Pro X, and what we’re looking at here is a standard resolution screenshot, and this is the Retina resolution screenshot. We’re going to go back and forth. Now, what I’d like to call attention to here is that it’s not just a standard upscale; the opportunity here is to actually add significant amounts of detail to your application graphics and really put a whole different look and feel onto your product.
For example, I’d like to call attention to the camera icon there. At standard resolution there’s really not much going on there, a few little details, but at High Resolution there’s a whole lot more detail. There are also other subtle things you can do to your design on a Retina display that are simply not possible at standard resolution. Let’s look at another example here; this is Reminders. This is Reminders at standard resolution and at Retina resolution. Standard resolution, Retina resolution. I’d like to call attention to the finer detail in the paper texture and some of the subtle highlights and gradients that can be rendered in a much higher quality fashion at the Retina resolution level. What do you need to do to achieve these kinds of results? Well, graphics resources simply need to be created at twice the pixel density, and you just need to integrate these new @2x image resources into your project and you’re done. It’s really that easy. In normal cases, for well behaved Cocoa applications, you really do not need to write any code to integrate these High Resolution artwork assets. It’s real easy, but there are a few things to keep in mind here. Up-resing large quantities of graphics in applications that feature a lot of graphic [inaudible 00:10:35] can be a very challenging design task, and I want to emphasize this for those of you who may be involved in design tasks as well. This portion of the project is not to be underestimated. In fact, it has been our experience that artwork related tasks can typically consume over 50% of the overall effort of making your application optimized for the Retina display. It’s not just a bunch of image files. There’s a huge amount of work. There’s a lot of organization that has to go into it, and good communication between designers and developers is essential to making this process run smoothly. Otherwise it can be death by a thousand paper cuts.
Let’s get more into the technical [inaudible 00:11:11]. What kinds of image resource categories do you have to worry about for High Resolution? Well, there are really three: bitmaps, vectors, and application and document icons. I’m going to talk about each of these and some of the unique considerations for each of them under the Retina display. Bitmap image resources. This is perhaps the most common category of image resources, and here the most important thing to know about is that there is a file naming convention that we have taken from iOS, which is the @2x file name suffix. It works very simply and straightforwardly. You have your standard resolution asset, which is just a normal file name, and then you take that same file name, add the @2x suffix, and that is how you indicate to the system that this is a double resolution Retina asset. Some details about this. The @2x assets differ from the standard resolution assets in that they have exactly twice as many pixels; they should be exactly twice the width and height in pixels. Of course, the thing to keep in mind is that on an actual display these things are going to be the same physical size on a Retina display as their standard resolution counterparts. This is frequently forgotten, but it is an essential point that I’m going to be circling around in a bunch of different ways. Here once again, this is looking at things pixel for pixel, but, of course, on the actual Retina display, point for point, that’s what it’s going to look like. What does this look like in code? How do you achieve success? How do you get this magic automatic behavior where you don’t have to write a single line of code? Well, it does help if you’ve already written the right code to begin with. NSImage will automatically locate and use the High Resolution 2X image representations if you use the look-up-by-name constructors for NSImage.
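The naming convention is mechanical enough to express as a tiny Python helper. This is a sketch of the convention itself, not an Apple API:

```python
import os

def at2x_name(filename):
    """Derive the @2x counterpart of a standard-resolution file name,
    e.g. "Button.png" -> "Button@2x.png"."""
    base, ext = os.path.splitext(filename)
    return base + "@2x" + ext
```

A build script could use something like this to verify that every standard-resolution asset in a project has a matching @2x file on disk.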
NSImage also supports multi-resolution TIFFs, which you can generate automatically using Xcode’s Combine High Resolution Artwork project build setting, and that provides some slight efficiencies in file system and run time performance as well. That’s another option you have available to you. How does NSImage help you in this High Resolution world? Well, the magic thing about NSImage is that it is able to handle rendering of an image at any resolution, both standard resolution and High Resolution. This is actually critically important for the Mac compared to iOS, because on the Mac you have multiple displays with multiple possible resolutions, and because of its ability to support multiple representations, both standard and High Resolution, in the same NSImage object instance, an NSImage can be ready for duty in any resolution context. Once again, you need to make sure you use the by-name lookup to get this automatic behavior where it finds the appropriately named file resources. There’s either the classic NSImage imageNamed: or the new NSBundle category method we introduced in the 10.7 timeframe, which allows you to do the same by-name lookup from any arbitrary bundle, not just the main app bundle. Keep in mind that with NSImage the size of the image, the size of the NSImage instance that is, is always in points, but the pixel geometry of the individual representations may vary. In fact, for proper operation, the 2X representation should ideally always be exactly twice the pixel geometry of the 1X one. Then at draw time, what NSImage does is actually choose the best representation to draw based on the characteristics of the rendering destination. It computes how many pixels it’s going to draw into, then looks at the representations it has and picks the one that’s going to be the best match. In a High Resolution scenario, for a 32 by 32 point destination rectangle it’s going to pick the 2X, 64 by 64 pixel representation.
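The selection Patrick describes — compute the destination's pixel coverage, then pick the closest match — can be sketched like this in Python. This illustrates the logic only; it is not how NSImage is actually implemented.

```python
def best_representation(reps_px, dest_size_points, backing_scale):
    """reps_px: list of (width_px, height_px) representations.
    Pick the representation whose pixel geometry best matches the
    pixels the destination rectangle will actually cover."""
    needed_w = dest_size_points[0] * backing_scale
    needed_h = dest_size_points[1] * backing_scale
    return min(reps_px,
               key=lambda r: abs(r[0] - needed_w) + abs(r[1] - needed_h))
```

For the 32 by 32 point rectangle from the talk, a 2X backing store needs 64 by 64 pixels, so the 2X representation wins; on a 1X display the same call picks the 32 by 32 pixel representation.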
In contrast, both CGImage and UIImage, for that matter, have a geometry that’s always in pixels. Actually, CGImage is always in pixels; UIImage is slightly different, but it’s limited to one resolution. So if you’re working with CGImage directly, you have to do some of this management yourself and be aware of the difference there. Okay. The next image category: vector image resources. Here we’re really, at the end of the day, talking about PDF, and the nice thing about PDFs is that they scale automatically, both to any user size, that is, the size on screen, as well as to any pixel density. In some ways they’re ideal for High Resolution. They do have some limitations in terms of the graphical construction and visual complexity that you can design. We typically recommend that they be used for simple elements such as button images, like what you see here on screen, the share glyph, which is actually a PDF in the system. It’s best when combined with AppKit template image rendering to give you ideal visual effects at High Resolution. Let me talk about template images. Template images are really about getting as much bang for your buck out of a single piece of artwork. The way template images work is we just treat the image as a shape, and they’re typically indicated either by having the file name end in the suffix Template, or you can mark an image as a template by calling NSImage setTemplate:YES. What happens is, at draw time, AppKit actually takes that shape that you provide and uses some image effects to provide a context specific appearance appropriate to the current state of the control or the window. For example, you automatically get pressed, disabled, inactive, and rollover appearances. All that stuff is provided automatically and matches the look and feel of the system. Most importantly, this effects rendering and rasterization of your shape is done at full backing resolution, which means you automatically get a high quality look at both standard and High Resolution. Okay.
That covers vector image resources. Now let’s talk about application and document icons. You may have encountered these before. These are traditionally backed by ICNS files, and they’re used by the Finder, open and save panels, the Spotlight menu, and a couple of other areas. These are really the way your application advertises itself to the rest of the system. You don’t typically see these resources in your app; they’re typically consumed by other pieces of software outside of your app. They’re an important part of the overall imagery of your app. Now, they’re special because ICNS files, and icons in general, are the only image category that has a variable user size. They have the magical property that they can be rendered at any size, and they’re supposed to have a high quality result at almost any size that you draw them into. How do they do this? Well, ICNS files, as you may know, have five slots for providing individual bitmap representations: 16 by 16, 32 by 32, 128, 256, and 512. Then what happens at draw time, very similar to the way I described NSImage working, is it just chooses the most appropriate pixel slot based on how many pixels it’s about to draw into. For example, if at High Resolution we’re drawing into a 128 point rectangle, which is really 256 pixels, it’ll just go and find that 256 pixel slot, take that representation, and draw it. So what’s the problem? Why am I even blathering on about this? Well, there is a problem, unfortunately. Retina displays are different. Let’s go into detail about what I mean. Let’s look at a 16 by 16 icon. All right, at 2X, to draw a 16 by 16 icon, that’s, for example, list view in the Finder or any open panel, you really need 32 pixels. Of course, it’s going to be the same physical size when rendered point for point. That’s the distinction I alluded to earlier. You can see there’s already a dramatically different amount of detail, but now here’s the key point.
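The classic slot lookup — find the slot that covers the pixels about to be drawn — can be sketched as follows. This is a conceptual model of the behavior described here, not the actual icon machinery, and the fallback for oversized requests is an assumption.

```python
# The five classic ICNS bitmap slots, in pixels.
ICNS_SLOTS_PX = [16, 32, 128, 256, 512]

def classic_slot(point_size, backing_scale):
    """Classic ICNS lookup: the smallest pixel slot that covers the
    number of pixels about to be drawn."""
    needed = point_size * backing_scale
    for slot in ICNS_SLOTS_PX:
        if slot >= needed:
            return slot
    return ICNS_SLOTS_PX[-1]  # assumption: fall back to the largest slot
```

This reproduces the example from the talk: a 128 point rectangle at 2X needs 256 pixels, so the 256 pixel slot is chosen — and it also shows the flaw Patrick is about to describe, since a 16 point icon at 2X silently borrows the 32 pixel slot that was designed for 32 points at 1X.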
There are two key points I’m going to talk about here, and I’m going to spend a little bit of time on them. First off, icons with the same pixel count but different target resolutions, and by target resolution I really mean target pixel density, may need different visual treatment because of the display size. These are the exact same number of pixels, but because on a Retina display things are so much physically smaller, from a graphical design perspective you will probably find yourself needing to use different heuristics to get the same perceptual effect. You might want to, for example, strengthen or emphasize strokes, use slightly more saturated colors, or just use a slightly different graphical treatment than you would at the same physical size on a standard resolution display, just because the perceptual characteristics of the Retina display are different. For example, this shows the same icons with the same pixel count, but blown up. You can see how they’re really dramatically different. Okay, the second point that presents a problem for the traditional way icons operate is that a standard resolution icon at a larger size may not always serve as an appropriate choice for a High Resolution icon at a smaller size, because the image content may actually be different. At small icon sizes it’s very common for icon designs to choose a totally different, simpler information design, because when you’re trying to get a concept across in a tiny space, like a 16 by 16 thing in a list view, you may want to choose a totally different representation than when you’re doing something huge at 256 by 256. For example, here in our famous little fish bike thing, at a larger size I might want to put the fish on a bicycle, but at a smaller size I don’t have enough room to really convey the concept of a fish on a bicycle. I may just want to reduce it to a fish.
The problem with our previous design of just aggressively taking the next larger pixel count size is that it’s going to be flawed, because it might actually pick something with the wrong symbology, which means as I drag a window across a 1X or 2X screen and the resolution changes, I may actually get the fish with the bike, without the bike, with the bike. It’s just not good. How do we design the perfect icon to handle this problem? Well, it turns out what we really need is a full complement of representations for both 1X and 2X at every one of the size slots, where the size slots are really the sizes in points. The advantage of this is that it’s canonicalized very similarly to the way that standard image resources work. Here’s the same grid pixel for pixel, which shows you the relative scale of these things, and what I’d like to call attention to here is the additional amount of detail that is provided in the larger icons compared to the smaller ones. Of course, point for point, this is how they would compare if actually viewed on equivalent physical displays. Okay. What have we done to actually make this happen? Two things. The first is we have enhanced the ICNS format to support 2X variants for every user size slot, and by the way, everything I mention here that’s got the New badge is actually new in 10.7.4 and Mountain Lion; it’s not just limited to Mountain Lion, starting with the new MacBook Pro with Retina display. Under High Resolution, these @2x variants are prioritized when you actually draw into a Retina display or a High Resolution window backing store. Some of you who have been following this story of High Resolution over the years would be interested to know that the 1024 by 1024 size that we introduced last year has now just been relabeled. It’s now the 512 by 512 @2x, and the 1024 size slot itself is effectively deprecated.
Note, not every slot in this full matrix needs to be populated for the icon run time machinery to work correctly, but having all of the 2X counterparts is recommended as a practical measure, because once you go to the effort of doing your optimization for Retina, it’s really just simpler to provide them all. That’s been our experience, at the very least. Okay, next. How do you actually create one of these image resources with this crazy new format? Well, you may have used Icon Composer before, and it’s got the workflow of building an icon one image well at a time. You get your graphics from wherever, and then you’ve got to go drag them into all these image wells. Well, we’ve decided that whole workflow is just fundamentally broken. Icon Composer is effectively deprecated as of now, and there was much rejoicing. What have we done to replace it? Well, we’re introducing something called icon sets. Icon sets are a new icon art delivery format for High Resolution. What it really is, is actually quite simple; it’s almost too simple. It’s just a folder and file naming convention. It’s a folder that has to have the suffix .iconset, and what it contains inside of it is a whole bunch of PNG files. They look a little familiar, and they should. They’re just like standard bitmap graphic resources, except there’s one additional twist: they have encoded into the file name not just whether they’re a standard resolution or a High Resolution asset, but also what size slot they conform to. There’s a standard naming convention for each of these files, organized into a folder with the .iconset extension, and this is the way that we expect you to integrate your artwork into your project sources. What are the features and benefits of icon sets? Well, first and foremost, they are a much better fit with existing design workflows.
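The folder's expected contents follow one fixed pattern, which a short Python sketch can enumerate: one `icon_<size>x<size>.png` plus its `@2x` variant for each point size slot.

```python
# Point-size slots of an icon set, matching the five ICNS slots.
ICONSET_POINT_SIZES = [16, 32, 128, 256, 512]

def iconset_filenames():
    """Canonical file names expected inside a .iconset folder."""
    names = []
    for size in ICONSET_POINT_SIZES:
        names.append("icon_{0}x{0}.png".format(size))
        names.append("icon_{0}x{0}@2x.png".format(size))
    return names
```

The full matrix is ten files, from icon_16x16.png up to icon_512x512@2x.png (the relabeled 1024 pixel asset).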
I’m going to demonstrate this in a second, but really, given that designers are working in some random image authoring app to create these things, what they know how to do really well and reliably, which is very important, is create a bunch of bitmap images like PNG files. What nobody likes to do is have to then use a different tool to transform their master file into what they actually have to integrate into the project sources. This gets rid of that, eliminates that middleman, and just allows designers to create exactly what needs to be integrated into project sources. Also, not to be underestimated, much more reliable color management, because, again, there’s less transformation going on between the icon art deliveries and what happens on their way into the software. Speaking of on the way into the software, the magic thing about icon sets is we’ve added support in Xcode to automatically transform these into compliant ICNS files at project build time. Xcode 4.4 and 4.5 have been augmented to understand icon sets as a basic graphic resource and can automatically transform them, just like the way PNG files and @2x PNGs can get transformed automatically into TIFFs. It also knows how to convert icon sets into ICNS files. We’ve also provided a Quick Look plug-in for verification of icon sets. For example, if you’re a designer, you want to make sure that what you just generated is actually working correctly and is properly registered and aligned. You can just hit the space bar in the Finder on the folder, and up pops a UI with a slider. I’ll demonstrate that in a second. We’ve also provided a command line tool, iconutil, which ships as part of the base system as of 10.7.4 and allows you to convert back and forth between ICNS files and icon sets. Very simple, doesn’t do much more, but it’s quite handy.
Also, one last little thing we added to Icon Composer in case you’ve lost the original sources to your icons and you need a way to bootstrap the process. We’ve added an Export to Icon Set feature to Icon Composer, and that’s available in the Graphics Tools image for Xcode 4.4 and later. Okay, that’s enough talking; let’s do some demoing. What we’ve got here is an actual MacBook Pro with Retina display. The first thing I’m going to demonstrate is the process of adding High Resolution artwork to your application. We’ve got a wonderful, really mission critical application here called fish bike, and what I’d like to call your attention to is that it has pretty low resolution graphics. What is up with that? It doesn’t do much. It plays a few sounds if the sound levels were up, but that’s okay. We’ll try that again when we add our High Resolution artwork. I happen to have gotten an artwork delivery from my designer, and look, there it is. There’s the bike and the fish with the appropriate file names, and I’m going to add them to my resources folder. I’m going to copy them into place, I’m going to rebuild, and voilà. Woo, it’s that simple. There is no step three. Let’s see if the sound works. Oh yeah. That’s a fish sound if I ever heard one, and that’s a bike sound if I ever heard one. It’s a really important app. Okay. That’s the process of adding artwork. Now, I’m still missing an icon for my application. What do I do about that? Well, my designer gave me a folder called new icon; what’s in here? Oh my goodness, he gave me his source files. He was a bit lazy. He didn’t create an icon set. Well, let’s go figure out how that’s actually done. What I have here is a template file. You don’t have to use this kind of workflow, but this is a very typical design workflow where you have a single Photoshop template with all of your various sizes in it. Over here are all the standard resolution representations, and here are all the 2X representations.
You’ll notice that the image is all sliced up. Every one of the representations has a little slice around it. What that allows me to do is, if I use the Save for Web feature, I can use this little export setting that I created myself, and all it does is make sure that it places them all into a folder called myicon.iconset. It doesn’t do anything fancy beyond that. Export those out, and let’s go back into the Finder, and look, there it is. If I do that, oh, well, there’s the Quick Look plug-in, but look, I made a mistake. I actually kept the background in there, so let’s fix that. Let’s turn off the background. There we go, and now let’s try that again. You notice here, you can sort of see implicitly all the names of the files. The little missing link I should have mentioned before: this is using the Photoshop feature where the file names are encoded as slice names, so that when you do use Save for Web it automatically creates these files. Let’s take a look again, and perfect, a fully transparent icon. Let me just demo it here. This is the Quick Look plug-in that gives you the little slider that lets you exercise all the different sizes of the icons. Actually, if I had a multiple display setup here, if I had, say, this display hooked up to this MacBook Pro and I dragged this window across to it, it would actually show the standard resolution. What we’re looking at here are all the 2X reps. If I were to present that window on the standard resolution display, it would show me the 1X reps. It’s a good way to debug your icon. Well, how do we add it to our app? First, let’s name it something appropriate, and now let’s go back to our fish bike app and add it to the resources here. Let’s rebuild, and I don’t know if you noticed that, but I’ll bring it up here. We have a beautiful icon already built, built automatically by Xcode. Just because it’s fun, I’ll play the sounds again. That’s definitely a fish sound. That’s definitely a bike sound.
All right. I think we’re done. We’ve got a perfectly optimized High Resolution app for the MacBook Pro with Retina display. All right. That’s the end of my demo. I’m going to hand it back over to my colleague Dan Schimpf. He’s going to talk about some final touches you can put on your apps and some considerations. There you go, Dan. Dan: Thank you, Patrick. Okay. What are some final things you’re going to need to know about as you work on this? Well, as we said before, the display scale is dynamic. It can change. The user is allowed to change it at any time. They can attach a different display. They can attach a projector that is a different scale factor. They can drag a window from one display to another, and that changes its scale factor. Here’s a quick demonstration of how that works. I’ve got a 1X display and a 2X display right next to each other, and I’ve got a window straddled between both of them. As you can see, the window is labeled 1X right now. That’s because it’s primarily on the 1X side. The display that owns more of the window wins in this case. What happens when the user drags it over to the 2X side? Well, now more than 50% of that window is on a 2X display. The window automatically rebuilds itself into a 2X backing store. How do you react to display changes? Well, you don’t need to do much. In fact, you may not need to do anything at all. The window automatically redraws itself when it switches scale factor. That means that any content you have that is drawn dynamically automatically works. If you’re caching bitmap drawing, you may have to do some extra work. Preferably, don’t cache any drawing at a specific scale factor, but if you do, you have to invalidate that cache to react to the change when it happens. Let’s talk a little bit about deprecated API that you may still be using, but which is going to cause you some trouble when moving to High Resolution. convertRectToBase:.
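The majority rule for a straddling window can be sketched in Python as a conceptual model; AppKit does this for you, and the function name here is illustrative.

```python
def winning_scale(area_by_scale):
    """area_by_scale: mapping of display scale factor -> the fraction of
    the window's area on that display. The display owning more of the
    window decides the window's backing scale factor."""
    return max(area_by_scale, key=area_by_scale.get)
```

As the user drags the window, the fractions change; the moment more than half of the window sits on the 2X display, the window rebuilds itself into a 2X backing store.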
It turns out this method had two uses in the past, and now, in the High Resolution world, those two different uses have two different answers. We deprecated convertRectToBase: and we’re replacing it with a method for each of those two uses. convertRectToLayer: will deal with converting a rectangle, and there are also corresponding size and point variants. You can convert those coordinates from a view to its layer. Then there’s convertRectToView:, which has been around for a long time; use it with a nil view to get window coordinates. NSImage compositeToPoint:, and also dissolveToPoint:. These have been deprecated for a long time, and now is the time when you really want to get rid of these calls, because they will have funny behaviors at High Resolution in some cases. We recommend using the NSImage methods that begin with draw. If you’re going to draw an image, look for a method beginning with draw and you should be all set. NSScreen userSpaceScaleFactor is tied to the pre-10.7 model of High Resolution interfaces, and it will always return one now, because in effect, as far as your code is concerned, that is how it works. QuickDraw is only limited to 32 bit anyway, but take special note if you’re using things that might use QuickDraw, like NSMovieView; you’ll be affected here. It’s time to get off of QuickDraw, especially for High Resolution. If you’re caching bitmap drawing, you may have to take some special considerations to deal with High Resolution accurately. The preferred way to do this is to create an NSBitmapImageRep and then use an NSGraphicsContext to draw into it. The benefit of this is that it’s automatically scaled to the correct scale factor for whatever context you’re drawing into. CGBitmapContext, which some of you may be using, is similar to CGImage in that it’s measured in pixels.
If you’re going to use CGBitmapContext, you’re going to need to scale the context yourself by the scale of your final drawing destination, which you can do; just keep in mind that you need to invalidate that cache when the destination scale factor changes. That’s something extra that you need to keep track of. Okay. Let’s talk about a few problems you may run into when you’re converting your application to High Resolution. “My 2X images aren’t drawing. I think they’re there, but I don’t see them showing up. My images are still a little bit fuzzy.” Well, the first thing you should do is check your build product to make sure they’re actually there. Maybe Xcode isn’t including them; for some reason they’re in the wrong spot. Keep in mind they can be there either as an @2x PNG, or, if you combined them into a high-resolution TIFF, you need to look for the TIFF. There’s a tool in Quartz Debug that helps you with this. If you have Quartz Debug installed, you can use the Tools menu and enable “Color 1X artwork.” This is similar to an option that’s available in the iOS Simulator: it will tint any image that is being scaled up from 1X to 2X, to make them pop out at you. “Now my image is actually drawing four times as big; it’s being scaled twice somehow.” Make sure you’re not scaling it twice, inadvertently or on purpose, when we’re already scaling it for you. Again, the best thing to do here is to ignore the 2X-ness as much as possible and let AppKit handle it. That will make your code cleaner and make this better. This is another case where compositeToPoint: will really come back to bite you, because depending on how your context is set up, especially if you’re doing off-screen bitmap drawing, using compositeToPoint: can sometimes lead to an image drawing way too big. Here’s another problem: let’s say my views are out of line. I’m doing some manual layout in code. I’m calculating and setting frames, and now all of a sudden they’re just a little bit off.
I see maybe some pixel cracks. There’s this method on NSView, centerScanRect:; you pass it a frame and it gives you back another frame that’s aligned on a good pixel boundary. You’re going to want to use that instead of, say, rounding the frame values yourself or using [inaudible 00:39:08] or anything like that, because it turns out at 2X you can actually have something at 0.5 points and still be on a good pixel boundary, because you’ve got a pixel there, so it’s not going to be misaligned. Just be okay with half points if you see them. “I’m doing some custom drawing in a view and it looks a little bit high or low.” Let’s say I’m drawing my own buttons: I’m drawing the background and I’m drawing the inside, and maybe the inside seems off from where it used to be. There can be rounding differences between 1X and 2X, because all the pixel sizes are doubled at 2X. That means there are really no odd pixel values at 2X unless you’re doing something very custom, and that can lead to some rounding differences. If you can manage it, try to control the rounding and see if you can modify your 1X drawing to match. “I’m doing some window layout and my windows are in the wrong place now.” If you’ve been doing High Resolution work before, you may be scaling your window coordinates, but you don’t need to do that anymore. Window coordinates, just like everything else in the system, are in points. “I have a custom Core Animation layer, and it’s not sharp; it’s fuzzy.” You need to set the contents scale on layers that you manage yourself. If AppKit is hosting that layer, you don’t need to worry about it; we’ll handle all the details for you. But if you’ve got your own CALayer, you need to handle the contents scale on it. There’s also a delegate method so you can handle this easily for custom contents in the layer. If you’ve got OpenGL content: for compatibility reasons, OpenGL content is not opted into 2X drawing by default. You need to do that yourself; you need to opt into 2X OpenGL drawing.
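The fixes just described might be sketched as follows. The AppKit and Core Animation method names are real; the surrounding variables (frame, myLayer, myGLView) are illustrative:

```objc
// 1. Pixel cracks from manual layout: let the view align the rect.
//    Note that at 2X the result may legitimately contain half points.
frame = [self centerScanRect:frame];

// 2. A CALayer you manage yourself needs its contents scale set:
myLayer.contentsScale = [[self window] backingScaleFactor];

// ...or, as the layer's delegate, opt in when the window's scale changes:
- (BOOL)layer:(CALayer *)layer
        shouldInheritContentsScale:(CGFloat)newScale
        fromWindow:(NSWindow *)window
{
    return YES; // the layer will redraw via -drawLayer:inContext:
}

// 3. OpenGL views are not opted into 2X by default:
[myGLView setWantsBestResolutionOpenGLSurface:YES];
```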
Now I’m going to do another quick demo on some image drawing issues that you may run into and some solutions that you can use. Let’s open our project up. This is another incredibly sexy application that helpfully draws three images. My designer has chosen to embed the scale factor right into the image, so he can always tell what they are. We’re drawing these in three different ways. The one on the left, obviously, is a regular NSImageView. The one in the middle is an off-screen bitmap, and the one on the right is a custom layer that I’m inserting. Right now we can see that all these images are at 1X. The first thing I need to do, like we did before, is grab my 2X images, just drag them in, and rebuild. Now they’re all running at 2X. Well, I’m done, right? One thing that’s somewhat hard to demonstrate here is what happens if you’ve got one display at 1X and another display at 2X. What we’re not seeing here is that if this display were running at 1X and I had a 2X display attached, the off-screen bitmap and the custom-layer images would actually always be drawing the 2X image. They’d just be scaling it down, and that’s not any better. So we’re actually still not done here. Let’s look at how we’re drawing the off-screen bitmap image. I can see there’s a cache-image method that calls lockFocus and then calls compositeToPoint: with another image. Then we’re drawing that image with compositeToPoint: again. How can we make this better? Using the demo file that we all have, I’m going to copy this, and I’ll explain it once it’s at its much larger font size. Here is a much larger method that creates a bitmap image rep at the pixel size, scaled up by the scale factor that we’re drawing into. Then it sets the size, the logical point size, of that image rep to the point size, the logical size, that we’re going to be drawing in. Then we use an NSGraphicsContext, set the current context, and draw into it.
We draw using the drawInRect: method, and then we cache the image scale factor so that we know when to invalidate that cache. In drawRect: we can take a unit size and use convertSizeToBacking: to see how big, in pixels, that unit size becomes, compared against my cached image scale. Then I know whether I need to re-cache that image. Then we draw that cached image in the normal spot. You may think, well, that’s all fine and dandy, but that’s a lot more code than I used to have. Is there anything better? It turns out there is. Hopefully in the number two slot in my demo script. I can take all of this code, copy it, and replace it with something much smaller. There is a new method on NSImage in Mountain Lion for drawing NSImages using a block. In my cache-image method, what we can do is create a new image with the logical size that we’re drawing into, the point size. Then, using this block, AppKit will invoke the block when it needs to redraw that image. It will cache the bitmap, cache the drawing contents, and anytime it needs to redraw because of scale factor changes, it will use this block again. You see I’m just drawing my image, and maybe I’m drawing a frame rect just to do something more complicated. Again, my cache-image method gets smaller, we don’t have to do a lot of the caching, and we get correct 1X and 2X operation. We don’t even need to cache the scale factor. My drawRect: method becomes a lot simpler, and I use the same drawInRect: call from before. As you can see, I’m getting the same image with the red border that I put in. Okay. Now we’ve got one more image to deal with, and that’s the custom layer. What’s the problem there? CALayers can primarily have CGImages as their contents, so what’s the problem here? Well, if you’ve got a 1X monitor and a 2X monitor, how is it going to know which image to pick in this case, since I’m not really passing any context information in? What’s it going to do? It’s going to pick the densest screen and use that.
This is always going to pick the 2X image and get me the CGImage for that from the image. Well, since 10.6 you can actually set an NSImage as the contents of a CALayer. That can be your first step, and that could be it right there, because AppKit will automatically switch back and forth; if you’ve got an NSImage in the layer, it’ll know to switch out the actual pixels based on what scale factor it needs to draw at. You could stop right there. If you’ve got something more complex than just a simple image, there are a couple of other things you can do. You can use a delegate method. Let’s replace this: I’m going to make myself the delegate and then copy this in. Again, this is going to be more complicated than you need just for the simple image case, but if you’ve got more complicated drawing that you want to do over that image, or something else, this is a good way to do it. I have this delegate method, shouldInheritContentsScale. I say yes, we do want to inherit that contents scale, and that’s going to immediately go back and try to redraw the contents of that layer, which in turn calls the other delegate method, drawLayer:inContext:. I’m going to make an NSGraphicsContext out of that. Again, if you’ve got something more complicated, you could replace this section; whatever you’re drawing into your layer goes here, and you’ll get automatically redrawn when that layer needs to be drawn at a new scale factor. Okay. Those are some simple problems you may have drawing images. For more information on all of this, you can bug our frameworks evangelist, and we highly encourage you to check out the documentation. There are all-new guides on High Resolution. I highly recommend you check those out. It’s a great read. It’ll really keep you up at night. The Apple Developer Forums are there for any help that you may need from your fellow developers and us. Okay, a summary of what we’ve covered today. There are new High Resolution display modes for the new hardware.
They correspond to four pixels on screen for every point. To adopt this, you need to add 2X bitmaps to your apps for all of your 1X artwork, and you need to look at your icons and up-res them as well. You need to look at your code and see what you can clean up for 2X. It may not be a lot, and users will definitely thank you. You need to make sure that you’re handling display changes; not just “I launched at 2X, so I’m always going to be at 2X,” but handling going from 1X to 2X and then back again. Then please read the documentation. We worked on it very hard, and it’s great. Thank you very much for coming. Have a great week.
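For reference, the Mountain Lion block-based NSImage API used in the demo can be sketched like this. The source image, the 128-point size, and the red border are illustrative; only imageWithSize:flipped:drawingHandler: and the drawing calls are the real API:

```objc
// Sketch: AppKit re-invokes the handler whenever the image must be
// rendered at a different scale factor, so no manual cache
// invalidation is needed.
NSImage *cached = [NSImage imageWithSize:NSMakeSize(128, 128) // in points
                                 flipped:NO
                          drawingHandler:^BOOL(NSRect dstRect) {
    [sourceImage drawInRect:dstRect
                   fromRect:NSZeroRect
                  operation:NSCompositeSourceOver
                   fraction:1.0];
    [[NSColor redColor] set];
    NSFrameRect(dstRect); // something beyond a plain image draw
    return YES;
}];
```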
