Optimize your iPhone App for the Retina Display 

Session 134 WWDC 2010

The iPhone 4 Retina display packs four times as many pixels into the same size screen as earlier iPhone models. All of these additional pixels mean your apps can now display incredibly crisp images and text. While iPhone OS has built-in support for high pixel density and takes care of most of the heavy lifting, there are several opportunities for you to optimize your applications for this new display. Learn the tips and tricks to ensure your application takes full advantage of these wonderful new display capabilities.

[ Applause ]

Andrew Platzer: Good afternoon.

Welcome to Optimizing Your iPhone App for the new Retina Display.

My name is Andrew Platzer, I work on iOS application frameworks, and I’ll be talking about the changes you need to make to your application on the UIKit side in order to support the new display.

And then Richard Schreyer will be talking about the OpenGL side of things.

So hopefully you saw the talk, of course, the Keynote, and you saw the new display.

I guess you haven’t had much chance to actually see the display in real life, but it really looks good, it is just amazing.

And we do a lot of stuff for you, but there’s a lot of stuff you can do to make your application look much better.

With the new display, with the very high resolution, it gives you a chance to really refine your visuals.

We do some of that for you, we give you the high resolution text.

But you can provide high resolution images at the same time and make the app look like it was designed for the new phone.

It gives you a chance to make your application a bit more immersive, you know, people completely forget that it’s a phone.

They won’t see any pixels; all they do is concentrate on your application.

And this is not just for games, but for any kind of application.

All they see is your content, your program.

Of course now you have to support two different devices with two different resolutions, which makes it a little bit of work, but not too much, hopefully, for you.

And of course there are compatibility issues.

If you’re going to write code for the new resolution, you’re going to want to make sure that it continues to run on old hardware as well, so that it looks good on both.

So I’ll talk about the changes you need to make in UIKit.

And here are the topics I’m going to cover.

First I want to talk about how we’re going to implement compatibility so the application continues to run on both old and new hardware.

We’ll talk about how you can provide high resolution images while still providing lower resolution images for older hardware.

We’ll talk a bit about how the view system implements drawing in high resolution, and what you can do when you’re actually drawing your view to draw finer lines and finer images.

I’ll talk a little bit about the underpinnings, Core Graphics and Core Animation, that UIKit sits on top of and how they implement high resolution, and finally, how you can provide high resolution icons and launch images so it looks like your application belongs on the new hardware.

So first I want to talk about points not being equal to pixels.

What this really means is that in the old days we would use these interchangeably.

We would say I’m drawing a 100 pixel-wide image in 100 point sized view.

And they’d be 1 to 1.

And on the desktop and so on you’re used to that too.

It was pretty much a given.

I mean, we didn’t make any distinction about it.

But now this is no longer true.

We’re now going to use only points, but these points are device independent.

So your old application still uses the same values as it did before, but now they’re no longer a match for the pixels.

So the Retina display is an exact 2x screen: one point now equals 2 pixels instead of 1 point equaling 1 pixel.

There’s some advantages to having an exact 2x screen.

You don’t have to worry about fuzzy pixels.

So for example if you’re drawing a 1 point line you won’t get one-and-a-half pixels, and then you’ll have half a pixel blur, maybe even spanning 3 pixels, depending on the size of the screen and so on.

You don’t have to worry about pixel cracks.

That’s where you’re trying to put two images right next to each other.

But because one of the images doesn’t quite fit across the pixel boundary you’ll see a thin line.

And in fact on the device we have a special mode where, when an image is sent to the screen at 2x, we don’t do any interpolation.

So you get an exact copy of what you would see on an old device.

Every single pixel on the old device now takes a 2×2 pixel grid on the new device.

So it looks identical.

But of course with the 2x resolution you’re using more memory and you’re moving more pixels around, more bits around.

So you have to be aware of that.

If you have a very graphics-intensive application with tons and tons of images, be aware you’re going to need 4x as much storage for those images.

And if you’re drawing a lot you’re going to be impacting your performance.

The newer hardware helps with some of that.

But you know, it may not be enough.

So here, for example, is a screenshot of Safari, taken from the simulator running as the old hardware and running as the new iPhone 4.

I know you can’t see very much, let’s zoom up the first one.

So you can start to see that there’s a much better quality of not just the text but the graphics.

We can zoom in a little bit more, so you can see really how much better it looks.

You can see the text, not only is the text sharper, but things like the Wi-Fi indicator, much, much sharper.

The rounded corner on the text field, very nice and smooth.

And even in the web page itself, the word “store” looks much better; you could actually read it on the new display, where it was just a blur on the old one.

So there’s a much better opportunity to present your information without having to zoom in on it, if possible.

And the user doesn’t have to sit there and zoom in all the time to see that.

So what is a point?

Well, we could give it a fancy name like some other frameworks do.

Call it, like, device-independent measurement of pixel elements, or dimples.

But maybe you don’t want to say that, so we’ll just stick with point, so we’ll just use the word point because everybody’s used to it.

There’s no fixed size to it; we’re not going to be pedantic, we’re not going to say it’s exactly 120 points to an inch or whatever.

I mean, each different display has a different resolution already.

The iPhone 3GS, for example, is different from the iPad.

So a good, you know, rule of thumb is you know what a point is when you see it.

A 10 point font is reasonably readable, and a 40 point by 40 point area is something that is easy to tap on.

You don’t want to get any smaller than that if possible.

So as I said, we now work solely in points.

So everything in UIKit is just points.

There’s no mention of pixels.

And so of course we start at the bottom, with UIScreen.

UIScreen has a new read only property that’s called scale.

And of course on old hardware it’s 1.0, and on new hardware it’s 2.0.

And so the bounds are in points.

So you’re no longer referring to the size of the pixels in the screen you’re referring to the point size.

And so it’s still 320×480, and of course the advantage of that is there’s no UI relayout.

You don’t have to suddenly multiply your window size or your view sizes or anything by two in order to fit on the new screen.

Everything just stays the same.

This also means that for UITouches, the locationInView: value is in points.

But this means you might get fractional points, 10.5, for example.

So if you’re trying to do some exact integer comparison for a particular location, you’ve got to watch out for that, because of course, you might be half a point off.

And you’ll notice this for example in the simulator, especially where of course you can be exactly accurate on a half point boundary, a single pixel boundary.

And it means of course that there’s no UI relayout; UIWindow frames are also in points.

So 320×480 window is still full screen.

And wherever you position your window, if you have another window on top, for example, that doesn’t fill the screen it still remains in the same place on the new iPhone 4.

So as I said, we’re working with points only and not pixels.

UIImage of course works with pixels; it works in concert with Core Graphics.

The UIImage.size property, which is already there, is, like everything else, in points.

This means we have to add a new property, just like we do in screen, called scale.

And this maps from the point size that we specify to the CGImage’s pixel size.

So if you’re going to work with the pixels directly, be warned that this value has changed.

I’ve seen people actually use the image size as a convenient way to work with the pixel size after getting the CGImage.

Now you’ll be rudely surprised when you only get a quarter of your image, for example.

Instead, you should use CGImageGetWidth and CGImageGetHeight to determine the actual pixel dimensions of the CGImageRef that’s attached to the UIImage.
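As a quick sketch of that distinction (the image name here is hypothetical):

```objc
// UIImage.size is in points; the backing CGImage is in pixels.
UIImage *image = [UIImage imageNamed:@"Button.png"];
CGSize pointSize = image.size;                         // e.g. 100 x 100 points

// Pixel dimensions come from the CGImageRef, not from UIImage.size:
size_t pixelWidth  = CGImageGetWidth(image.CGImage);   // e.g. 200 on a 2x image
size_t pixelHeight = CGImageGetHeight(image.CGImage);

// In general: pixelWidth == pointSize.width * image.scale
```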

So now we can support 2x images using the scale property.

You want to be able to load this.

Well, we do a lot of it automatically for you.

All you need to do is create a new file with the same name as an existing file.

So for example, if you have Button.png just create a new file called [email protected].

Make it twice as big.

So if you’ve got a 100×100 point button, create a 200×200 pixel button and just drop it in the exact same place.

We’ll automatically load that.

We’ll insert that @2x suffix before the extension when you call things like imageNamed: and imageWithContentsOfFile:, and return that scaled UIImage automatically.

So you don’t have to do anything in terms of code.

We’ll just return it.

Note that when we do load that image in, for compatibility, we ignore the DPI.

We had some applications when we were developing this which actually provided us with 300 DPI images, which of course these appeared very, very tiny on the screen.

And since these images are intended to be used by your interface, not as sort of a general image editing program, we ignore the DPI completely.

So we only look at whether there’s an @2x suffix to determine if we need to set a 2x image scale.
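A sketch of what that loading behavior looks like, assuming Button.png (100×100 pixels) and [email protected] (200×200 pixels) are both in the app bundle:

```objc
// The same call works on both kinds of hardware:
UIImage *button = [UIImage imageNamed:@"Button.png"];

// On a Retina display, UIKit finds the @2x file automatically:
//   button.scale == 2.0, button.size is 100 x 100 points
// On older hardware, it loads Button.png:
//   button.scale == 1.0, button.size is still 100 x 100 points
```

Either way, the point size your layout code sees is identical, which is what keeps the UI from needing any relayout.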

In iOS 4 now we’ve added a few new methods and a few new behaviors.

One of them is that you no longer need to include the .png when you call imageNamed:.

So you can just say imageNamed:@"Button", for example.

That will save you having to remember to type that all the time and be confused as to when your image doesn’t load the first time.

We added a couple of class and instance creation methods that take both a scale and an orientation.

So now you can create a full UIImage from your own CGImageRef with both the scale and orientation set.

And one case where we do work with the DPI is when you ask for the NSData using UIImageJPEGRepresentation or UIImagePNGRepresentation: we’ll actually take the scale, assume a standard 72 dots per inch as a scale of 1.0, and create a PNG that’s marked as 144 DPI, for example, if that image has a scale of 2.0.

Though again, if you were to read it in using the standard calls, we would ignore that DPI.

I’ll talk a bit about how you can actually get that DPI a bit later.

If you wanted to draw an image off screen, we gave you a function called UIGraphicsBeginImageContext.

And it took a single parameter which was the size, and this size was effectively in pixels.

Which meant there was an implicit scale of 1.0, and also that we automatically assumed the image had some transparency in it.

So when we created the CGImageRef underneath, it would have alpha info saying there was an alpha channel there.

We’ve extended this a little bit in iOS 4: we’ve added a new function called UIGraphicsBeginImageContextWithOptions.

This function takes the size as before, but as everything else in UIKit now, this is effectively in points and not pixels.

It takes a scale.

And the scale can be any value you want.

But we do have a special meaning for the scale value of 0.

This says use the same scale as the main screen.

So on the new phone, this value will implicitly be 2.0, on other older hardware, it will be 1.0.

So you can just pass on 0 and stay compatible no matter what kind of hardware you’re running on.

And we also provide you a simple Boolean opacity flag.

And this just tells us, when we’re creating the CGImageRef underneath, that we don’t have any alpha.

Now this unfortunately is not a space saving.

We still need some padding for efficiency.

It’s actually still faster to draw it, if we include padding for the alpha.

But it does let you gain some performance by telling us that this image is completely opaque, so we don’t need to do any blending when we draw the image.

And one good thing in iOS 4, and the reason you can really begin to use this function everywhere, even on secondary threads, is that we’ve made drawing thread safe in iOS 4.

This means that you can use any combination of UIColor, UIFont, the NSString drawing extensions for actually drawing strings, UIImage drawing, UIBezierPath, and so on, in any thread.

Every thread has its own context.

So if you call this function on multiple threads you’ll get multiple contexts, and you can extract images from each of those contexts at any time.

You don’t have to worry about any threading issues there.

So this lets you really render stuff on a secondary thread.

Say you’re rendering the contents for a table cell, and it takes a lot of time to render.

You can do that in the background and then have your interface appear right away and slowly fill things in.
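A minimal sketch of that offscreen pattern, using the new function (the size and drawing here are placeholders):

```objc
// Render a cell's content offscreen; safe on a background thread in iOS 4.
// Passing 0.0 for scale means "match the main screen":
// 2.0 on a Retina display, 1.0 on older hardware.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320.0, 44.0),
                                       YES,   // opaque: skip blending on draw
                                       0.0);  // scale: follow the main screen

// ... UIKit drawing calls go here: NSString drawing extensions,
//     UIImage drawing, UIBezierPath, and so on ...

UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// rendered.scale is 2.0 on Retina hardware, so it draws at 1x point size.
```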

So now we’ve got some high resolution images, which we can load or create using offscreen buffers, and which we’ll want to be able to draw.

We’ll also want to draw other stuff in high resolution directly when drawRect is called.

So UIView has one new property as well, also a scale.

Now this one’s named a little differently, a bit more verbosely.

It’s called UIView.contentScaleFactor.

And the reason we call it that, rather than just scale, is to differentiate what it affects.

It does not affect anything but the content.

On the CALayer, this is the contents property.

Which is usually something like a CGImage ref or the contents that you generate when drawRect is called.

It does not affect the geometry or the scales or the transforms of the view or any subviews.

So if you have a view with some subviews, and you go from a 1.0 to a 2.0 contentScaleFactor, none of the subviews will change.

If you’ve already scaled the UIView by setting an affine transform, again, setting the contentScaleFactor will not suddenly resize your view twice as big or half as big or whatever.

It just affects how the content of that view, whatever it is, an image or such, gets positioned and scaled inside that view.

It always returns a nonzero value, just so you can divide by it safely without having to worry about a divide-by-0 case.

And in general, it’s 1.0.

But there are a couple of cases where we do set it.

The first is in the drawRect.

So if you have a UIView subclass that implements drawRect, and it’s placed in a window on a screen that’s 2.0 scale, we will set the contentScaleFactor to 2.0.

And this kicks off a bunch of things that happen automatically to handle high resolution.

Also with UIImageView: when you set a UIImage with a scale of 2.0 into that image view, we tell the UIView, “Oh, by the way, you’ve got some content here.”

In this case, a CGImageRef, or a UIImage whose CGImageRef has a scale larger than 1.0.

And in general, you know, you never need to set it.

I mean, you’re not setting content directly, really; you’re either setting it using UIImageViews or we’re calling your drawRect.

So it is a writable property, but you probably never need to set it.

So the drawRect call is the most common case, where you’re generating actual content for that particular view rather than assembling it from other subviews.

The rectangle we pass in, drawRect, is in points, just like everything else, as I’ve mentioned.

There’s a bitmap buffer underneath which is twice as big on the new iPhone 4.

So if you have 100×100 view, you’ll get a 200×200 pixel buffer.

Now what we do is that when we call your drawing code, we set up a CGContextRef, which you get from UIGraphicsGetCurrentContext.

And that will have a CTM, a current transformation matrix, set up with that scaling.

So when you draw a 100×100 box, for example, it’s scaled up twice as big to be a 200×200 pixel box when it finally gets rendered into the buffer.

So this is all set up automatically for you when the view’s contentScaleFactor is set.

We also set up some other internal parameters as necessary, so that text rendering when we generate the actual text bitmap is at high resolution.

And of course UIBezierPath, which sits on top of CGPath, is drawn much more smoothly.

The antialiasing along the line, where the curve partially covers pixels, is much finer than it was before, so it looks much smoother.

You can use the UIView contentScaleFactor to determine the size of a single pixel; since it always returns a nonzero value, you can just divide by it.

And I’ll show you how that works.

So for example, let’s say you want to draw a single one pixel wide horizontal line using UIRectFill.

Pretty standard way of doing a horizontal line.

Currently, you would do something like UIRectFill(CGRectMake(0.0, 0.0, width, 1.0)).

And of course on the new display, that will draw a two pixel high line.

So you want something a little bit finer.

All you need to do is get the content scale factor which will be 2 on a view on a new display.

And divide that into 1.

So you get 0.5 in this case on a new display, and 1.0 on an old display.

And then when you fill to that height you’ll get a single pixel no matter what display you’re running on.
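Putting those steps together, a minimal sketch of that drawRect (not taken verbatim from the slides):

```objc
- (void)drawRect:(CGRect)rect
{
    // A hairline that is one *pixel* high on any display:
    // 1 / 2.0 = 0.5 points on a Retina display, 1 / 1.0 = 1.0 points otherwise.
    CGFloat pixelHeight = 1.0 / self.contentScaleFactor;

    UIRectFill(CGRectMake(0.0, 0.0, self.bounds.size.width, pixelHeight));
}
```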

Similarly with the UIBezierPath, you know, you’ll do the same thing.

You’ll create a UIBezierPath.

You’ll set the line width to be 1 divided by the view’s contentScaleFactor.

So it’s going to be 1.0 or 0.5.

And then here, for example, we’ve done a little code where we’ve moved to (0, 0), added a line across to the width, and stroked.

Of course if you’ve used UIBezierPath before, you’ll have run across the fact that when you stroke it, the line actually spans multiple pixels, because the pen is centered on the path.

So, you’ll just need to adjust the line a little bit.

Shift it down by half a pixel.

In this case, that’s literally a quarter point.

Because half a point equals 1 pixel.

And then stroke between those two points to get a nice single pixel line.

And of course if you’re doing curves and such, you’re going to have, you know partial pixels along the way, and it will be very nice and smooth there too.
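A minimal sketch of the Bézier version, assuming we’re inside a drawRect: implementation:

```objc
// One-pixel-wide horizontal line with UIBezierPath.
CGFloat pixelSize = 1.0 / self.contentScaleFactor;   // 0.5 on Retina, 1.0 otherwise

UIBezierPath *path = [UIBezierPath bezierPath];
path.lineWidth = pixelSize;

// The pen is centered on the path, so shift down by half a pixel
// (a quarter point on a Retina display) to land on a pixel row exactly.
CGFloat y = pixelSize / 2.0;
[path moveToPoint:CGPointMake(0.0, y)];
[path addLineToPoint:CGPointMake(self.bounds.size.width, y)];
[path stroke];
```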

UIImage, before as I said, has a scale value.

The scale value is used whenever you call the various draw methods, drawAtPoint: or drawInRect:.

drawAtPoint: uses the image size, which is in points, to determine how big to draw.

And UIImageView as I mentioned before automatically sets the contentScaleFactor to the image’s scale factor.

This is less important with scale-to-fill, because that mode will automatically scale the contents of the image to fit no matter what size you make the view.

But if you’re using one of the other content modes, let’s say center or top left positioning, then you need to tell us how big the actual image is, otherwise it will appear way too big.

So for example, if you had it set to center mode and you didn’t tell us the scale was 2, then you would have an image that was twice as big, and it would probably go past the edge of the bounds of the view.

Whereas it will be shrunk to the correct size when you have the image scale set to 2.0.

If you’re going to provide multiple images to a UIImageView, either by setting a highlighted image and then setting the highlighted state, or by providing an array of UIImages to animate, they should all have the same scale factor.

We don’t adjust on the fly.

So if we’re playing a set of animations from a set of UIImages, we’re not going to suddenly go from 2 to 1 to 2 to 1 as the animation goes, so you’ve got to make sure that all the images you provide have the same scale factor.

And that’s animationImages.

And just one side note, because I have seen people do this.

Please don’t override drawRect on UIImageView.

It does confuse things, and sometimes you’ll get cases where it will draw low resolution at one point and high resolution later, because your drawRect interferes with our handling of the contentScaleFactor.

One other view that is affected by the scaling is UIScrollView.

In order to get nice smooth scrolling as we go, we try to scroll by one pixel at a time.

So of course on a new display, on a Retina display, we’ll scroll by half a point.

So if you’re rounding exactly to one point, be warned that it might appear to jump; you should try to avoid rounding if possible, just let us position it.

So we’ve got stuff drawing, it’s all nice, high resolution.

I just wanted to talk a bit about the underpinnings of this, Core Graphics and Core Animation.

Core Graphics deals only in pixels.

There’s nothing about resolutions, DPI or anything in Core Graphics.

You only have CGImageGetWidth and CGImageGetHeight.

There’s no CGImage get or set resolution call or anything like that.

There’s no way to do that.

This means that if you do want a scaled image you have to store it somewhere else.

Which is what we do, of course, when we provide UIImages.

UIImage pretty much has one extra ivar, which is the scale value.

Some people, especially if they’ve been drawing on a background thread, create a CGContext using CGBitmapContextCreate.

Well, you really, really should just use UIGraphicsBeginImageContextWithOptions, because it gives you all this extra functionality, all this behavior that you don’t have to worry about.

And it does all this set up.

If you’re going to use CGBitmapContextCreate, you have to do a bunch of steps in order to draw using UIKit on it.

You probably have to flip it to match UIKit’s direction for the coordinate system, you need to set the scale, you need to push that context, and then later on, when you get the CGImageRef out of it, you’re going to have to store that scale somewhere else as well.

And there’s really no performance gain; we do a few little extra function calls, perhaps, but when you’re moving 1,000 or 100,000 pixels and blending them and all that, that’s where the real time is spent.

So especially now that UIGraphicsBeginImageContext is thread safe, there’s really no necessity to go all the way down to CGBitmapContext.

Before Core Animation, if you do want to read DPI, I just wanted to mention another framework that’s available now, I believe starting with iPhone OS 3.2, called ImageIO.

And it’s the exact same framework that’s available on the desktop.

And it lets you read and write images, get full information, and do multiple images per file, like TIFF and so on.

If you do want to read the DPI of a file, you can call a function called CGImageSourceCopyPropertiesAtIndex.

So for any particular image in a file you can get its DPI.

It’s just the properties called kCGImagePropertyDPIWidth and kCGImagePropertyDPIHeight.

And some formats allow different values for that.

Generally, of course, people assume 72 is, you know, 1x scale.

But you can do whatever.

And just as a note for those who load in PNGs: because PNG uses pixels per centimeter, sometimes you’ll get 71.9 or whatever.

So you’ll want to round that if it’s close to an exact value like 72.
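A sketch of reading that property, assuming you already have an NSURL called fileURL pointing at the image file, and using the manual retain/release conventions of the time:

```objc
#import <ImageIO/ImageIO.h>

// Open the file and copy the properties of its first image.
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)fileURL, NULL);
NSDictionary *properties =
    (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

// DPI width/height; PNG stores pixels per centimeter, so round near-72 values.
NSNumber *dpiWidth = [properties objectForKey:(NSString *)kCGImagePropertyDPIWidth];
CGFloat scale = dpiWidth ? round([dpiWidth floatValue]) / 72.0 : 1.0;

[properties release];
CFRelease(source);
```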

And if you’re doing a more general image editing program, you’ll want to use something like this, because it lets you keep things like the EXIF data as well.

UIImage is really meant for interface images and very simple images.

It’s not meant for a complex system.

With Core Animation: UIKit sits on top of Core Animation, and a UIView has an associated layer, so Core Animation has added its own property, called contentsScale.

It’s not quite the same name as ours, but it’s equivalent to contentScaleFactor: when you set the UIView’s contentScaleFactor, it gets sent down to the CALayer.

And as I said just before, it’s automatically set for UIViews that implement drawRect and UIViews that have an image set.

Because when we set the image on a UIImageView, we actually just set the layer’s contents with that image.

It’s rare you’re going to need to set this directly.

But if you’re going to work down at the Core Animation level, you’re going to have to set this; for example, if you want to provide a mask layer, which is a way of masking out a section of the content of an existing layer.

And you probably want to match the resolution, which means you’re going to have to provide high resolution artwork for that mask and set the contentsScale for it as well.

And finally, I just want to talk about how you can provide high resolution icons and launch images, so that your application looks like it’s native to the iPhone 4 and people don’t see blurry icons on the new hardware.

Normally, you just create an image at a particular size.

So for an application icon you normally create a 57×57 sized image.

In this case, you’ll just create a 2x image, 114×114.

Now you can call it any name you want.

Normally, when you provide a single image, you set something in the Info.plist called Icon File, which is a single string, the name of the application icon.

So you’ll want to actually switch that to Icon Files.

That will replace the single string with an array of strings.

And you can just specify them in whatever name you want.

But you may as well use @2x, you know, just to be consistent.

And it’s automatically chosen based on the size of the image.

So if you’ve got a high resolution display we will use 114×114 icon.

Otherwise, we will use the 57×57.

Here, all I did was change the type from Icon File to Icon Files with a pop-up and give a list: AppIcon.png and [email protected].
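In the raw Info.plist, that change looks something like this (the raw key behind “Icon Files” is CFBundleIconFiles; the file names are the ones from the slide):

```xml
<key>CFBundleIconFiles</key>
<array>
    <string>AppIcon.png</string>
    <string>[email protected]</string>
</array>
```

The system then picks the right one by size: the 114×114 file on a Retina display, the 57×57 file otherwise.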

Similarly, for document icons, you normally create a 22×29 sized icon.

So you’ll want to create twice-as-big icon files, with of course finer quality artwork.

And just as before you’ll need to modify the info.plist.

Now you can enter this icon using Xcode in one of the panels, I can’t remember which one, but it only takes a single file.

So you’re going to have to go into the Info.plist and edit it yourself.

You’re going to have to replace CFBundleTypeIconFile, which is a single string, with an array of strings, CFBundleTypeIconFiles.

And again we choose the correct one based on the size.

So here, for example, we’ve got a dictionary; right now we’re listing one file type with one icon file, and now we add the twice-as-high-resolution version, [email protected].

And of course there’s also settings icons for those of you who provide settings or for searching.

In both cases we use 29×29 icons normally, so you provide 58×58.

Again, you have to add an entry into the info.plist.

Now these are chosen the same way that app icons are chosen.

So you just add them into the Icon Files list along with your app icons.

And again you’re going to add in @2x versions of the same icon.

So you’ll have, for example, Settings-icon.png and [email protected].

And we’ll just automatically choose the larger, higher resolution one when you’re running on new Retina display.

And finally launch images, of course when your application launches you want to provide images.

You’ll snapshot it using perhaps the simulator or new iPhone, when you get a chance to get one.

And you’ll edit it or whatever.

And you’ll put it into the info.plist.

Now normally you specify using launch images entry.

And it’s a base name, so we’re already modifying the name when we look for it.

So for example, if you launch in landscape, we’ll look for LaunchImage-Landscape.png.

All you need to do now is, just like before, add an @2x.

Now this really does need to be @2x, because that’s how we find the high resolution images.

So if for example, you might have [email protected] or [email protected], and we’ll choose the right one based on the orientation and the type of hardware, the Retina display or the older displays.

So I just wanted to summarize in a couple of points.

First sort of the bad news and the good news.

The don’ts for high resolution: don’t assume points are equal to pixels.

Especially when you’re working directly with pixels using Core Graphics.

You’ve got to just make that break and say we’re now working in points only.

This means that you can’t assume, as I said explicitly, that UIImage.size is the pixel size.

I have seen cases, for example, where someone’s using CGBitmapContextCreate and passing in image.size before making CG calls.

And of course now your image is going to be clipped.

You don’t want to assume that the CGContext you get when you’re drawing is exactly at 1.0 scale.

We may have created a twice-as-wide pixel buffer underneath.

So if you’re going to start reading the pixels from a CGBitmapContext, you’re going to start missing pixels if you’ve assumed the 1.0 scale.

Please don’t cram in a lot of information using a smaller font or smaller images or whatever.

Yes, it’s amazing that a 5 point font is actually quite legible on the new phone.

But for those of us with older eyes that’s going to be very difficult to read, regardless.

So just use it as a refinement, not as a way of squeezing in additional information.

And similarly, if you’re providing high resolution artwork, don’t embellish the 2x artwork, you know, don’t suddenly turn it purple when it’s in high resolution.

Because people might be using your app on older hardware at the same time, or on their iPad with the smaller artwork, and they want consistency.

So what do you need, in the end, to support high resolution?

Well, the first thing you do is nothing.

We automatically give you high resolution text, high resolution bezier paths, and all the system UI elements: the controls, nav bars, toolbars, and standard icons.

All redone in high resolution.

So your application looks pretty good already, especially if you draw a lot of text.

If you’re going to do more high resolution work the first thing you want to do is actually no programming change at all.

Just create @2x images.

So go through all the images in your project and see how you can scale them up.

You know, if you’ve got lots and lots of images, just look at the ones that really will benefit from scaling up first, the ones with angled shapes or lots of curves and so on.

But you know, ideally you can do all of them, and everything will look really, really sharp on the new display.

Then, if you’re going to start generating your own artwork using offscreen contexts, please switch over to using UIGraphicsBeginImageContextWithOptions and pass in a zero scale, so you’ll always get compatible UIImage output.
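A minimal sketch of that pattern (a fragment, not runnable standalone; it needs UIKit on iOS, and the drawing in the middle is up to you):

```objc
/* Passing 0.0 for scale means "match the main screen", so the same
   code produces 1x output on older devices and 2x on iPhone 4. */
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 0.0);
/* ... Core Graphics / UIKit drawing calls here ... */
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```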

And now everything will look really sharp, even the stuff that you’re generating.

Then finally, if you really want, you can go into your drawRect: implementation, get the pixel size (one divided by the view’s contentScaleFactor), and start adjusting your drawing there.

That will get you a really, really nice appearance throughout your whole application.

So that’s all you need for the UIKit side of things.

Now I’ll turn it over to Richard who will talk a bit about things you need to work on with OpenGL.

[ Applause ]

[ Multiple voices speaking ]

Richard Schreyer: Thank you.

So obviously we’re going to talk about OpenGL on the Retina display.

Some of you in here have probably done some level of OpenGL development.

Many of you probably haven’t.

Hopefully, I’m going to try to make this approachable to pretty much everybody.

So if you do find yourself doing some level of OpenGL development in the future you’ll have that little thing in the back of your head that says there was something I should have been aware of here.

So let’s dive right in.

You heard Andrew discuss Core Graphics.

It’s very much a pixel-based API.

And so if you work at the Core Graphics level you’re biting off a little bit of manual effort to make that work correctly.

You have to carry a scale factor along on your own.

You have to really think about the size in pixels as opposed to device-independent points.

OpenGL is pretty much exactly the same in that regard.

It is very much a pixel-based API through and through.

So it does pretty much nothing for you on this front.

But fortunately, there’s still not that much that has to change.

And so just to remind you, I’ve put it up on screen: bounds times contentScaleFactor.

Because this is the algorithm that’s used pretty much universally to determine how many pixels are in the image that backs your various views.

Whether you’re talking about image views, content generated in drawRect:, or content generated with OpenGL.

So what are the steps; how do you adopt high resolution displays in OpenGL?

How do you make the most of the Retina display?

Step one is to allocate a higher resolution color buffer.

We want to fill more pixels.

That’s the advantage.

Step two is that we found a fair number of applications use various hard-coded sizes and dimensions.

That becomes a compatibility concern, and so we’ll have to fix that.

I’ll point out a couple of the most common trouble areas.

And then finally, just as you want to load higher resolution images and icon files, OpenGL is also very heavy on artwork and imagery that you provide.

And so you’ll also want to look at higher resolution content here.

So step one.

You actually already know entirely how to do this; it’s controlled via the same contentScaleFactor property that controls everything else in UIKit.

As Andrew said, for pretty much all of the built-in UIKit widgets, this is already set exactly the way you want.

It defaults to the scale of the screen.

So on an iPhone 3GS your text view is going to have a scale of 1, and on an iPhone 4 your text view is going to have a scale of 2, and you’ll get the right result on any device.

With OpenGL, the contentScaleFactor defaults to 1 everywhere, mostly for compatibility reasons.

So the first step to adopting high resolution is to set that scale factor yourself.

This is perhaps one of the only places where you will want to set a contentScaleFactor in an application.

In most cases, you know, the goal is to match the scale of the screen.

And so we’ll pull that out of UIScreen and copy that straight on through.

The second line here is where you actually reallocate the OpenGL renderbuffer.

That’s sort of our weird word for the image you’re going to be rendering into.

So at the time you call renderbufferStorage:fromDrawable:, Core Animation is going to snapshot the value of bounds and the value of scale.

It will multiply them together, and that will become the image size in pixels that you’re going to be making use of.

Given that OpenGL is a pixel-based API, it’s actually really handy to know how many pixels you’re actually drawing with.

So you can do the bounds times scale derivation yourself.

But usually it’s even easier and more foolproof to go ahead and ask OpenGL what got allocated for you.

And as I said, if you’re just copying to the scale of the screen, this will be different depending on which device you’re running on.

So in this case, we’ll take the pixel width and pixel height and stash them aside; they’ll come in useful.
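A sketch of that query (a code fragment, not runnable standalone, since it needs a live EAGL context with the color renderbuffer bound; the OES-suffixed names are the OpenGL ES 1.1 extension forms used on iOS):

```c
/* Ask OpenGL what Core Animation actually allocated, instead of
   deriving bounds x scale yourself. */
GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
/* Stash these; the depth buffer and viewport will need them. */
```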

That brings us on to step two.

Fixing any hard coded sizes.

This is what really sets OpenGL apart from, say, UIKit drawing.

In UIKit everything is in terms of points, and so all of your layout functions are unchanged.

And this is a great out-of-the-box experience.

With OpenGL, on the other hand, there are quite a few functions that explicitly take parameters in terms of pixels.

And even integer pixels.

So we couldn’t really do anything crazy behind your back.

And so I’ve put up on the screen some of the most commonly used functions that take pixel arguments, and these are really where you have to make sure you’re not hard coding anything.

And the good news is, if you’ve already made a universal application for both iPhone and iPad, you’ve probably already found and fixed all of these cases, given that the iPad also has a different sized screen.

But I want to point out just a couple of the most common trouble spots.

So OpenGL has the concept of a depth buffer.

It’s another image, but rather than storing a color per pixel it stores a distance per pixel.

This is how we figure out what’s behind what, handling occlusion properly, since we’re dealing with 3D scenes.

Given that this is a per-pixel value, we have to allocate an amount of storage that matches the color buffer. If you’ve hard coded 320×480 in here, then that’s not going to match the dimensions of the actual color buffer that Core Animation has allocated on your behalf, and things aren’t going to work.

Your drawing will actually be no-opped and errors will start flying everywhere.

So in this case, the easiest thing to do is grab our saved pixel width and pixel height and pass them right on through.

If you’ve been copying our sample code or using our OpenGL template this is actually probably already working just fine out of the box.

The second case that impacts every application is a viewport.

The viewport is a function that sets what subregion of the view your drawing goes into at any given time.

Every application is required to set a viewport at least once.

And so if you’ve got existing code, you’ll find it in there somewhere.

Most applications just use a single full screen viewport, they want to fill the entire view.

So once again, we’ll just take our saved values and pass them right on through.

So this will be a common pattern.
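Both fixes can be sketched as a fragment (not runnable standalone; backingWidth and backingHeight are hypothetical variables holding the color buffer’s queried pixel dimensions, and the OES-suffixed names are the OpenGL ES 1.1 extension forms used on iOS):

```c
/* Depth buffer storage must match the color buffer pixel-for-pixel: */
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                         backingWidth, backingHeight);

/* Full-screen viewport, expressed in pixels, not points: */
glViewport(0, 0, backingWidth, backingHeight);
```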

So I’ll stop here on hard coded sizes, because fixing them is really going to be a little bit more of a self-directed job, depending on exactly what’s going on in your application.

So that takes us to step three, where it gets really fun.

Now that you have an application that pretty much works at native resolution on the Retina display, now is when you really want to make the most of it and load high resolution content.

Just as you want to load better UI images, better icon files, and so forth, OpenGL is also really heavy on images.

And you have a great opportunity to show the user more detailed information.

Again, if your application is universal you may have already drawn a bunch of high resolution content.

And for 3D graphics, that content may already be perfectly applicable, so you might have an existing library of content that you can just leverage right out of the gate.

So for example, I mean, OpenGL often times uses a data structure like this.

We call it a mipmap.

It’s pretty much an image followed by another image that’s the same content, but half the width and height.

And recursively again and again and again until you reach 1×1 pixel.

GPUs love this kind of data structure for various reasons.

Usually the easiest way to handle this is to stick another base level on, right on the bottom.

In this case, it ends up being very noninvasive, and you also have the option of looking at which device you’re running on: do I have a screen where the user is actually going to benefit from this?

And if not, leave it off.

Because otherwise it’s just going to be a waste of memory.

But if you just stick this new base level on, OpenGL will automatically be aware of the size of your screen and the size of your textures, and will know when to use this extra detail, whenever it’s appropriate.

Just one word of warning.

This is actually a slide you’ve already seen, but I want to reiterate.

There’s a lot of code out there that uses UIImage to load textures, and UIImage’s .size returns its dimensions in points.

Well, you can’t take those values and just pipe them into glTexImage2D.

You’ll either have to make sure to scale them yourself: image.size times image.scale.

Or drop down a level and just straight up ask for the pixels with CGImageGetWidth and CGImageGetHeight.

This is probably going to be something that screws up a fair number of applications.

And then finally, there’s performance.

This is probably a much bigger deal for 3D content than it is for UIKit level content.

I mean, you really have to think about it: we’ve got four times as many pixels to fill.

And that’s usually the first question in determining performance: how many pixels are you filling?

So we found that there are some applications where you flip the switch and performance isn’t really what you want it to be anymore.

So I’m actually not going to talk about performance tuning in much detail, because that’s a gigantic topic.

We actually had yesterday an entire session on just this topic.

So if this is of interest to you, you’ll want to go back and catch the video.

We talk through both the methodology, where to start when doing performance analysis, as well as describing some brand new developer tools that can really help you understand the behavior of an application.

So you’re going to want to look at your control over how expensive it is to draw each pixel.

In OpenGL you can run an arbitrary program for every pixel to control exactly what color comes out the back-end.

You have various controls over operations like alpha test, which can be particularly expensive.

Again, mipmaps.

As I said, GPUs love mipmaps.

It can make a huge difference in the efficiency of the caches, it can make a tremendous difference in overall application performance.

But really, if you’ve followed our advice on tuning and find yourself still not reaching your performance targets (30 frames a second, maybe even 60), then again the thing to focus on is how many pixels you are drawing.

You know, X and Y have gotten a lot bigger.

And if you think about most applications, you’re stacking a bunch of blended objects.

And those all add up too.

So it’s really X times Y times Z.

And that number can get really big.

But it’s really X times Y that we have control over.

And so if you find yourself not reaching your performance goals, usually our suggestion here is that you actually don’t go up to match the entire size of the display.

In this case, you can draw your game at 720 by 480 instead.

This is still a significant step up in quality over what you were drawing on iPhone 3GS.

And yet on the other hand, you’re only filling about half as many pixels as you would be had you gone all the way up to fill the display.

We certainly wouldn’t object to filling the whole display, but that’s just not always feasible, especially with really, really high end 3D content.

So how do you actually make this work?

You could just allocate a 720×480 image, stick it in the middle, and have some black bars.

But not really.

That’s no good.

What you really want to do is scale it.

You want to take your smaller image and do a really, really good job of scaling it up to fit the whole display.

So the user still gets a really immersive experience.

And fortunately, you don’t have to do that yourself.

Core Animation can handle this job for you, and it can do a really, really fast job of it, and a really high quality job of it.

So again, if you find yourself wanting to reduce the pixel count in your application for performance reasons, how do you actually do that?

Let’s turn right back to our good new friend the contentScaleFactor property.

And just take note that this is how games and other OpenGL content actually work out of the box.

Just a reminder: for compatibility reasons, we default the scale factor to 1 for OpenGL content.

And so this scaling mode is already how every single GL application works out of the box.

So right off the bat, our goal was to provide the same great performance you’ve always had and the same great visual quality you’ve always had.

Not a bad place to start from.

But if you think about it, it’s a 4x jump in the number of pixels.

That’s a pretty huge gulf.

And there’s going to be a class of applications where you have some performance headroom to really step it up.

But maybe you can’t absorb a 4x jump.

And so there’s a couple middle grounds I want to talk about, where you can dial a knob, performance versus quality.

One of the really interesting options is sticking with a contentScaleFactor of 1 and adopting anti-aliasing within OpenGL itself.

This is a new feature we’ve added in iOS 4.

And it can make a really big improvement in visual quality.

Our anti-aliasing features will smooth out the edges of polygons, and will smooth out lines.

But they don’t actually affect the images applied to polygons so much.

But it’s actually a pretty good deal overall.

What’s really advantageous about this, though, is that the performance impact is much, much less severe than if you had gone all the way up and filled the 4x display.

Even better, this is actually portable back to the 3GS.

And finally, you also have more choices for content scale factor than just 1 or 2.

In fact, this is probably the only place in the entire API where you do want to consider a noninteger scale factor.

Think back to my original example where I said let’s render 720×480.

Cut the pixel count in half, double performance.

In this case, to get that doubling of performance, I had to write one line of code: set contentScaleFactor to 1.5.

This can be a really big deal for a complex application.

That’s pretty much the gist of it.

I mean, you really do have to be aware of points versus pixels, given that you’re using an explicitly pixel-based API.

You’re going to get touch input in terms of points, and you might have to translate that yourself based on the view’s contentScaleFactor.

For adoption, there are fix-ups here and there that you may need, but the core of it is to set contentScaleFactor.

If you care to, get out of the compatibility default of 1.0.

Check your render buffer dimensions, make sure they match.

If you turn this on and the first thing you see is black, this is probably what you got wrong.

And secondly, similar thing with viewport.

If the first thing you see is the OpenGL content crammed up in the upper left corner, that’s what you probably missed there.

So that brings us to the end of discussion on the Retina display.

If you have any questions you can contact Bill Dudney directly.

We have a whole new chapter on this in the iPhone Application Programming Guide and there’s also actually already a whole bunch of questions being asked about this in the Apple Developer Forums.

There’s a What’s New in Cocoa Touch session happening tomorrow, which goes into the rest of what’s changing in UIKit.

And there’s also the aforementioned OpenGL ES Tuning and Optimization session.

Which is obviously your first choice for OpenGL tuning, as opposed to simply dialing back on the screen resolution.

So with that, I hope this was really useful to you, and we look forward to seeing a lot of you at the labs.

Thank you.
