Working with Wide Color 

Session 712 WWDC 2016

Discover the fascinating details behind the display of color on Apple platforms. Go beyond the basics and learn how your apps can take advantage of a wider and deeper representation of color on iOS and macOS. Gain specific insights from the experts on optimizing your artwork asset pipeline and explore new API to develop more vivid, true-to-life experiences.

[ Music ]

[ Applause ]

Hello and welcome.

I hope that everybody’s having a great WWDC.

My name is Justin Stoyles, I’m part of our Graphics and Media Group here at Apple.

And I’m really excited to talk to you today about working with wide color.

We’re going to cover a few topics today.

I’m going to start with discussing some core color concepts.

And we’ll talk a little bit about what is wide color and why does it matter.

Then I’m going to hand it over to Patrick, who’s going to walk you through some of the processes of getting wide color assets into your application.

And then I’m going to hand it over to Steve, who’s going to talk about rendering wide color.

So let’s get started.

As I mentioned, I’m excited to talk to you about wide color today, and the reason why is that we’re actually transforming the way that we handle wide color, and colors in general, on the Mac and iOS platforms.

A good place to start is in the ’90s.

So Apple has always been at the forefront of solving the color problem in computing.

And a good place to start is with ColorSync.

So ColorSync is our color management platform on the Mac. It was developed in the ’90s, and at the time, Apple and a number of other like-minded companies got together with the idea that standards in color management and color reproduction would begin to solve some of the problems faced by engineers and designers when working with color.

That group later became known as the International Color Consortium, or ICC, which you may be familiar with.

The work of ICC was then incorporated into ColorSync and we built that fundamentally into OS X.

Now Apple has also been at the forefront of display technology.

We now have these beautiful retina displays across our product line.

People really, really love them.

But what really is the next frontier?

Well, last year we announced the beautiful new iMacs with P3 displays.

And the feedback we got was really tremendous.

They display 25 percent more color.

And they have the P3 color space, which people really, really love.

And if you fast forward another year, we have these new iPad Pro 9.7-inch displays with True Tone.

So we’ve taken that even further.

Now, there’s a lot happening under the hood here, and we’re going to try to get into some of those topics today.

And I’ll start with Core Color Concepts.

Let’s start very, very, very simple.

So, what is a color space?

Well a color space is an environment in which colors can be compared and represented.

It can be a 1, 2, 3, or 4 dimensional space that is defined by the intensity of its color components.

Those color components are also often referred to as color channels.

One representation that you might be familiar with is RGB.

So, this is very relevant with displays, where your subpixels are red, green, and blue.

And to think about these color channels you can think that the intensity of each of these color channels defines the color that you see on the other end.

There’s many different types of color spaces.

I mentioned RGB spaces, which are very popular in displays.

But there’s also lots more, there’s gray spaces, which are used in monochrome printing.

In color printing we often use CMYK spaces.

And if you’re trying to do calculations and transformations, we generally use color spaces like LAB, which are device independent.

So now we have color spaces, and color channels, and we now want to create a coordinate system where we can compare and compute colors.

So there’s the concept of color primaries.

So color primaries generally fall at the most intense value that you can get with that particular color channel.

So in this example of an RGB color space, you would see that the color primaries are where we anchor 1.0 in our color space.

So for black, I would have no saturation in each of my color channels, so I have 0, 0, 0.

For white I would have 1, 1, 1.

And for red I would only saturate the red channel, and I would have 1, 0, 0.

Very simple.

And when we refer to color gamut, we’re actually talking about all of the colors that can be defined as a combination of those individual color channels.

So now you understand some of the basic color concepts that we’re going to be discussing in this presentation.

So then what is wide color?

Well first we should talk a little bit about what is the standard in the industry today.

So standard RGB, or sRGB is the most widely used color space in computing today.

It’s based on the BT.709 standard.

We use an approximated gamma of 2.2.

And typical lighting conditions, which in this case means a D65 white point.

And it’s the default color space for iOS.

And it’s actually the default color space for a lot of platforms.

Which is very convenient, because there are some platforms out there that are color managed, and there are some that are not.

And since a lot of the content in existence today is sRGB, you can make some assumptions about your incoming content and have it reproduce faithfully on the other end.

But that’s not always going to be the case.

sRGB does a good job of describing colors that we work with on a daily basis, and our systems have been really good at displaying those colors to us.

But really there’s a lot of colors that don’t fit into sRGB.

A lot of textiles are designed using inks and dyes that have lots of colors that are outside of sRGB, mainly because these colors really catch our eye.

They’re really vivid and impressive.

If you’re watching soccer for example, a lot of the uniforms are actually outside of sRGB, because they’re striking, they get our attention.

Whether you’re watching your kid’s soccer game, or you’re watching the Eurocup, you’re going to see a lot of these jerseys with colors that are actually not describable within sRGB.

A lot of products are also designed with colors that are outside of sRGB.

And these are products that we interact with on a daily basis.

But really the most compelling examples are in nature itself.

Sunsets, autumn leaves, tropical waters.

These are all things in nature that have colors that are outside of sRGB.

And we actually want to show those.

And the nice thing is you probably have a lot of images that you’ve taken on your camera, especially if you’re capturing in RAW, that actually contain a lot of this color data, but you’re not seeing it on your display if your display is only sRGB.

So what do we do about this?

Well, last year as I mentioned, we introduced products with a new color space.

And that color space is Display P3.

So in our iMacs, and in our new iPad Pro 9.7, we use this color space.

And it’s based on the SMPTE standard of DCI-P3.

But it’s a little bit different.

DCI-P3 is a color space that is defined for digital projection.

And it works really great for those viewing conditions.

But our viewing conditions are a little bit different.

And sRGB is really great at defining a standard that works with our viewing conditions.

So we adopted the same gamma, and typical lighting conditions for our white point, as sRGB.

These are the key differences between Display P3 and DCI-P3.

In order to understand what colors are actually outside of sRGB, but can be described in Display P3, I’ll try to show it to you in action.

So here’s an image from my photo library, and I exported it using photos in a Display P3 color space.

This is a really beautiful image, and even though we’re viewing it on an sRGB projector today, or if you’re viewing the video, you’re viewing it in sRGB today.

This image still looks really, really great.

But there are a lot of things in this image that you might not notice right away: a lot of colors that aren’t actually describable in sRGB.

These are colors that are outside of the sRGB color gamut.

And in this case what you end up with is posterization in those regions, so those colors are flattened on the edge of the sRGB gamut.
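To picture what that flattening looks like in code, here is a minimal sketch (an illustration only, not Apple’s actual matching pipeline): when component values fall outside sRGB’s 0-to-1 range and there is no wider gamut to map them into, they simply get clamped to the boundary, and distinct vivid colors collapse into one.

```swift
// Illustrative only: a non-color-managed pipeline has nowhere to put
// out-of-gamut values, so it clamps each component into 0...1.
func clampedToSRGB(_ components: [Double]) -> [Double] {
    return components.map { min(max($0, 0.0), 1.0) }
}

// Two different wide-gamut reds (extended-range coordinates)...
let vividRed   = [1.25, -0.04, -0.02]
let vividerRed = [1.35, -0.09, -0.05]

// ...collapse onto the same point on the sRGB gamut boundary,
// which is exactly the posterization described above.
let a = clampedToSRGB(vividRed)
let b = clampedToSRGB(vividerRed)
// a == b == [1.0, 0.0, 0.0]
```

Once two source colors clamp to the same triple, no downstream processing can tell them apart, which is why those regions look flat.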

So how do we fix this?

Well, we’re moving more professional workflows to our mobile platforms.

And in order to enable that, it’s not just putting a wide gamut display into your system.

There’s a number of other things that need to come along with it.

One of those things is we need to upgrade our factory calibration.

So now all of our products have individually calibrated displays, so from device to device you can count on those colors being accurate and consistent.

And then finally, we had to build full system-wide color management into iOS.

And that’s exactly what we’ve done.

But we can’t just build the same color management system with the same approach as we did on the Mac.

There’s different considerations and different restrictions when working with a mobile platform.

One of those considerations are the countless applications that already exist in the iOS ecosystem.

These are applications that are built in sRGB, tuned for color and performance in sRGB.

So how do we make sure that those applications can still run on our platforms that are now color managed, without any impact to performance, and without any impact to your colors?

And what we did is we built on top of sRGB.

So we’ve introduced an extended range sRGB color space.

This is our working space for working with wide color.

We use the same sRGB primaries.

We use the same gamma 2.2.

The same white point.

But, the interesting difference here is that we actually allow values that are greater than one and less than zero. That not only keeps the same performance and look of all of your applications that are built in sRGB; using values that are negative and values that are greater than one also allows us to express any color in the visual spectrum, while still keeping our anchor points in sRGB.

The best way to describe this is with a demonstration.

So let’s say I want to take the most saturated red in Display P3.

So that would be 1, 0, 0.

Now, if I want to express that in extended range sRGB, it would look a little bit like this.

So in the red channel, I actually have a value that is greater than one.

And in the green and blue channels, I’m actually using negative values.

So I’m subtracting green and blue, and adding oversaturated red.

And what that allows me to do is get a color that is outside of the sRGB gamut, while using those same sRGB anchor points in extended range sRGB.

The interesting thing is if you’re using a pixel format that allows you to go very negative, and very positive, this approach allows you to express any color in the visual spectrum.
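As a rough illustration of that P3 red example, here is a Swift sketch that converts linear Display P3 components into linear extended range sRGB using an approximate, commonly published 3x3 conversion matrix (the exact coefficients the system uses are not given in this session):

```swift
// Approximate linear Display P3 -> linear sRGB conversion matrix
// (commonly published D65-aligned values; illustration only).
let p3ToSRGB: [[Double]] = [
    [ 1.2249, -0.2247,  0.0    ],
    [-0.0420,  1.0419,  0.0    ],
    [-0.0197, -0.0786,  1.0979 ],
]

// Multiply the matrix by a P3 color vector.
func toExtendedSRGB(_ p3: [Double]) -> [Double] {
    return p3ToSRGB.map { row in zip(row, p3).map(*).reduce(0, +) }
}

// The most saturated Display P3 red, (1, 0, 0), expressed in
// extended range sRGB: red above one, green and blue negative.
let extended = toExtendedSRGB([1.0, 0.0, 0.0])
// extended ≈ [1.22, -0.04, -0.02]
```

Note that sRGB white, (1, 1, 1), maps back to roughly (1, 1, 1), which is the anchoring property described above: sRGB content keeps its meaning, and only out-of-gamut colors need the extended values.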

So this is scalable.

Speaking of pixel formats, sRGB is largely standardized in 8 bits.

So 8 bits is largely enough to describe colors in sRGB, not perfect, but great.

Now, an easy way to describe the difference between the precision that we would want for sRGB and the precision that we would want for a wider gamut is with this example.

So say I wanted to build a staircase in my home from the basement to my first floor.

I would use, say 8 stairs, I’d probably use 12, but let’s say for simplicity I would use 8.

This is the appropriate height for me.

Now, if I wanted to extend that staircase up to my second floor, I wouldn’t just use the same number of stairs.

And when we’re using digital encoding for color, it makes sense that if you’re trying to express more colors, you would want to use more precision.

So if we’re talking about using colors that are outside of sRGB, really our recommendation is to use 16 bits per color channel.
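The staircase analogy can be put in numbers. This sketch (the -0.75 to 1.25 range is just an illustrative choice, not the actual bounds of extended range sRGB) shows that keeping the 8-bit step size over a wider range requires more codes than 8 bits can address:

```swift
// The 8-bit "stair height" over sRGB's 0...1 range.
let eightBitStep = 1.0 / 255.0

// How many codes are needed to span a range at a given step size?
func codesNeeded(lower: Double, upper: Double, step: Double) -> Int {
    return Int(((upper - lower) / step).rounded()) + 1
}

// 256 codes cover plain sRGB at this precision...
let srgbCodes = codesNeeded(lower: 0.0, upper: 1.0, step: eightBitStep)

// ...but an illustrative extended range of -0.75...1.25 at the same
// precision needs 511 codes, more than 8 bits (256 codes) can address.
let extendedCodes = codesNeeded(lower: -0.75, upper: 1.25, step: eightBitStep)
```

Sixteen bits per channel gives 65,536 codes, so it covers the wider range with precision to spare, which is the recommendation above.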

Now, before I hand it over to Patrick, I wanted to end with one more point.

So, we’ve gone through our entire system and upgraded all of our frameworks and tools to be color savvy.

But if you’re building your application with open source tools, or you’ve built your own image processing pipeline from scratch, you need to take some action to make sure that the tools that you’ve used are color savvy.

Otherwise, we’ve got you covered.

And for more on that, I’ll hand it over to Patrick.

[ Applause ]

Thank you Justin.

Hi. I’m Patrick Heynen.

I’m a Senior Engineering Manager in the Cocoa Frameworks Group.

And I’d like to ask where does wide color come from?

Well the answer is, it turns out, it’s not just this shirt.

No, it comes from you.

Namely, it comes from apps and their content, and the user experiences that you provide in your applications.

So what kind of content types are amenable for wide color?

Well the first one is the one you’re probably most familiar with and that’s static image resources.

These are the sort of the PNG, or JPEG, or individual graphic files that you may have bundled into your application and shipped along with your application bundle to your customers.

The next category is document and network-based image resources.

These are individual image content that you may either download off of a network service, or store in your document data.

There are also further categories, like advanced media, which refers to things like Live Photos, or content acquired from the built-in iPhone cameras.

I’m not going to go into too much detail about wide color implications for those kinds of types, but I refer you to the Advances in iOS Photography and the Editing Live Photos and RAW on iOS sessions for more details about that.

And lastly, there’s also the concept of GPU textures.

If your application is working at the GPU level, like a game engine or an advanced piece of graphics software, and you’re sending color information in the form of either textures or shader values to the GPU directly, there are other considerations as well for wide content.

I’m not going to go into much detail about those either, but I refer you to my colleague, Dan Omachi’s excellent treatment of this in What’s New in Metal Part 2.

Okay. So let’s frame the color problem.

Justin gave a great explanation earlier.

But how does it apply to apps?

Well app content can come from a broad range of sources and come in a broad range of color richness, all the way from grayscale, all the way through to even 16-bit wide color gamut content.

At the same time, devices and displays come in a broad range of color capabilities.

For example, the iPhone 5 was only capable of representing sRGB, all the way through to the latest iPad Pro, which does a great job with full P3 colors and extended range sRGB.

So, how do you bridge the difference?

Well, we’re going to solve this color problem.

And the backbone really is color management.

So, what is color management?

The job of color management is to ensure that an image looks the same on any output device, no matter what color space it is encoded in or how it was originally created.

That’s pretty much it.

But how does it work?

Color management starts with every image or content type having an associated color space, sometimes referred to as color profile.

This informs the system what the colors actually mean.

And then the color matching process algorithmically maps those colors to the characteristics and output space of the device that you’re rendering to.

Of course, this is a computational process, and it is not for free.

In fact, every single pixel needs to be touched and converted and matched.

Also, it’s important to note this is a potentially lossy operation.

Especially if you’re going from a wider color gamut source down to a narrower one, say 16-bit P3 content down to 8-bit sRGB, there’s going to be some loss of color fidelity in that scenario.

So that’s something to be aware of.

So these are important aspects, but there’s good news.

The first good news is about the color matching operations and the way they’re algorithmically defined.

I’m not going to go into the details of the color science behind them, but it turns out that computationally, they’re very easily hardware accelerated, either on the CPU or on the GPU.

Which leads to the next good news, which is that we’ve built this functionality into the system itself, so that it all works automatically via Quartz 2D, ColorSync on the Mac, and Core Animation.

In fact, all you really need to make sure of is that your content is tagged properly, and in that case, there’s no code required to display your image correctly, and have the colors appear correct.

So, platform color management.

Well, macOS, as Justin alluded to, has been color managed since its inception.

Some might even say since before its inception, when it was previously known as Mac OS, during the ColorSync times.

So there’s nothing new there, but there is something new on the iOS front.

Since iOS 9.3, we now have automatic color management support on most devices.

So that’s color management, the backbone of solving the color problem.

But now let’s talk about some design considerations, and about how the toolchain and the platform services have been enhanced to accommodate getting wide content into your applications.

First, as always it starts with design.

So, what is important to think about when you’re approaching wide color from a design perspective?

Well, the first thing I would say is that it’s important to use wide gamut content where it makes sense.

It doesn’t mean everywhere. Remember, most of the colors that we interact with on a daily basis are in fact contained within that nice sRGB triangle.

That’s where the bulk of the colors are.

There are however, these nice, new vivid saturated colors that may make sense for your application to use.

So really it’s important to think about wide gamut content as being a tool that you can use where vivid colors really enhance the user experience and add some value to your application.

There is no need to go and update all of your content and immediately upgrade it to P3.

This is not that kind of technology shift.

This is merely a new creative tool that you use where you want it, when you want it.

And the good news is we’ve enhanced the toolchain support to make this gradual opt-in of wide gamut possible.

So let’s say you do want to upgrade some content to wide color, what are some considerations?

It’s really important to be careful when you upgrade a design file to wide color.

A common pitfall is to just assign a new profile.

The file starts out as sRGB, and you just assign a new, wider color profile.

This is a mistake, because it will just remap the existing color information into the new color space.

It’s a pretty effect, but it’s probably not what you want.

Because it will just stretch all the colors out to be vivid and the appearance of the design file will be inevitably altered.

Instead, it’s important to use Convert to P3.

This will actually do a color match operation, your resulting design should not change its appearance, but it will be prepared to go and have some of its content boosted up into those nice, vivid saturated P3 colors.

When you’re working on wide gamut designs, it’s important to use, and we recommend strongly that you use the Display P3 color profile as your working document profile.

Also to maintain maximum color precision and highest quality it’s a good idea to work in 16-bit per channel color mode.

And of course, it’s nice to be able to see your designs.

So, if you are going to be working with wide color it’s good to do that design work on a system capable of rendering wide gamut color such as the late 2015 iMac, or other capable hardware.

When it comes time for production, for exporting and delivering assets, it’s important to use 16-bit PNG files with an embedded Display P3 ICC profile.

This is sort of the gold standard of content interchange for wide color content.

Now, a brief note.

There are some very popular content production workflows out there.

They come under different names.

I’m going to talk specifically about the Adobe Photoshop Workflow.

They come under the names of Save for Web and Export Assets.

Many of these workflows have not made the transition to wide color yet, and they are not compatible with exporting 16-bit P3 content.

So, stay away from them for now.

Instead, use the workaround of Save As to PNG format, with 16 bits and an embedded display profile.

Okay, so that’s the design process and how wide color impacts that.

Now, let’s talk about tools and how they can be used to incorporate the content you’ve produced out of the design process.

Well the first thing I’m going to talk about and really the epicenter of the content tool story is Xcode asset catalogs.

Now you may be familiar with this.

This is the general tool in Xcode that allows you to organize and catalog your image resources, mark them up and provide metadata, and deliver them inside your applications.

What do asset catalogs do for you?

Well, they’re the best deployment vehicle for static assets.

We automatically color correct your source content and optimize it for the target that you’re building for.

Now, maybe your designers are perfect and can always deliver their content with the right profile and all the right information in their deliverables, but that may not always be the case.

That’s where this automatic color correction comes in handy to make sure that we match and normalize it all into the appropriate working space for the target device.

Also, we do automatic pixel format optimization, making sure that the right bit depth ends up in the right device.

And, last but definitely not least, asset catalogs are the gateway to app slicing, which is an important component of the app thinning feature that ensures that only the content that’s relevant gets delivered to your customers on a particular device, and none of the content that isn’t.

So, what have we done in asset catalogs to make it easier to work with wide color content?

Well the first important enhancement is we now support 16 bit source content and provide a clean end-to-end path for 16 bit image resources.

We store it as 16-bit half-float, and it is available for you in Xcode 8.

The next important thing we’ve added is the ability to catalog a display gamut.

What does this mean?

This means we’ve introduced a new option in the inspector to provide tailored, optimized assets for either the sRGB or Display P3 display gamuts, matching devices with those characteristics.
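As a sketch of what that option produces, an image set’s Contents.json ends up with one entry per display gamut, something like the following (filenames here are hypothetical, and you should check a catalog Xcode 8 writes for the authoritative schema):

```json
{
  "images" : [
    {
      "idiom" : "universal",
      "filename" : "HueWheel-sRGB.png",
      "display-gamut" : "sRGB",
      "scale" : "2x"
    },
    {
      "idiom" : "universal",
      "filename" : "HueWheel-P3.png",
      "display-gamut" : "display-P3",
      "scale" : "2x"
    }
  ],
  "info" : {
    "version" : 1,
    "author" : "xcode"
  }
}
```

At build and thinning time, the entry matching the target device’s gamut is the one that ships.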

Okay so that’s the feature.

That’s what Xcode has to offer for organizing assets.

But how do you use it?

How do you think about using it?

Well, there’s really three easy choices here.

Choice number one, do nothing.

This may seem radical, but it is in fact a perfectly valid thing to do.

If you make no changes to your asset catalog, perhaps because you don’t need any of these new vivid colors, we will continue to render your 8-bit sRGB content faithfully on all devices.

So nothing will change.

You don’t need to go update everything just to make sure it works on this new hardware.

So this is a perfectly valid option.

Of course, this does mean your app will not contain any wide colors.

That’s a perfectly fine design choice.

That’s the outcome of this deployment choice.

So let’s say you do want to use some wide color in your application.

That brings us to choice number two.

Upgrade to P3.

So, the way this works is something we like to call Universal P3 Asset.

So, all it involves is taking a 16-bit Display P3 file that you may have gotten from your designer, and just replacing your existing asset in the asset catalog with this upgraded asset.

What happens at build time is we’re going to automatically generate an sRGB derivative from that 16-bit Universal Display P3 master.

We’re going to do a high-quality color match, and a high-quality dither down to 8 bits.

And then of course at thinning and content selection, we’ll make sure the right content variant gets selected on the appropriate device.

But let’s say you’re not really happy with this automatic conversion and you want full control.

Well the good news is we have that for you as well.

And that’s choice number three, optimize assets.

It’s a very simple option.

Basically, you provide your 16-bit Display P3 content and your original 8-bit sRGB content.

We provide places for you to organize both of those in the asset catalog.

And they’ll get built into your app and selected and thinned appropriately.

Okay, that’s wide color assets and asset catalogs.

I’d like to now give a demonstration of creating content.

Okay, so let’s say I’ve got this beautiful hue wheel here.

Let’s say I have this beautiful hue wheel here.

There we go.

That’s much better.

And I’ve decided this really could in fact benefit from some more vivid, saturated colors.

How is this done?

Let’s take a look at this here.

This is just a simple gradient here, a radial gradient.

And I just want to call out here this point in the gradient is a pure saturated green.

Okay, that’s what I’m starting with.

So what do I do to upgrade it?

Well the first thing I’m going to do, because I’m an organized type, is create a copy of this rather than just destroy the existing asset, and I’m going to name it something new here.

And, okay now I’m ready to work.

So the first thing I’m going to do is I’m going to update this to 16 bits per channel.

Now, we’re ready to work in 16 bit.

And now the all-important convert to profile.

And what I’m going to do is I’m going to change this to the Display P3 color profile.

Okay, now we’re ready to go.

In fact, I can confirm with the document profile down here: Display P3, 16 bits per component.

I’m good to go.

Okay, so now let’s take a look at what happened to that gradient.

And look at our favorite green here.

Okay, so that’s interesting right?

That green didn’t change.

Notice that the hue wheel didn’t change its appearance at all.

That was actually the intended goal of converting, as opposed to assigning.

But I’m left with the same green now, being only 70 percent saturated.

Well this really indicates just how much headroom there really is to punch out into that wider gamut and deploy a purer green.

So, well let’s say I want to do that, but I don’t want to be on stage all day, I’m just going to reapply the gradient preset.

Now everything should be back to what it was before, 100 percent saturated, but now in that pure P3 wide color space.

So there we are.

Now I have my beautiful asset, I’m going to go ahead and save.

And of course this is just a design file, so I can’t use it in my app until I save it out.

So I’m going to do that.

I’m going to save it as, whoops not as a [inaudible] but as PNG.

I’m going to make sure to embed the color profile.

And save it.

And voilà, I’m done.

Well, actually I’m not done, I’ve just finished the design process.

I haven’t incorporated it into my app.

So, let’s go do that.

Where’s my app?

There’s my app.

So here’s my app, unmodified.

Here’s my existing sRGB hue wheel.

Well, what I want to do here is go over to this pop-up to expose the sRGB and Display P3 gamuts.

This will immediately reveal a couple of additional cataloging options.

And I take my P3 hue wheel and just drop it right into that slot.

And I’m ready to go.

Now if I build and run, it will compile both of these into my asset catalog.

And if I was running on an iPad Pro 9.7 inch, I would be getting the 16-bit Display P3 asset, whereas on an sRGB device like an iPhone 6, I would be getting this asset.

Okay. That’s incorporating content into your application.

Okay so we talked about tools and how they work with live content.

Now it’s time to talk about some of the deployment considerations and what happens after you’ve used that tool and how it might impact the running of your application.

So, with asset catalog deployment, app slicing is going to ensure that the appropriate variant is delivered to a given device.

And this is really important because this is potentially a lot more content now that’s part of your application.

And, with that slicing, we ensure that there’s no payload cost for your actual end-user customers for adding this wide gamut content into your application, because we make sure that with app slicing, wide gamut content only goes to wide gamut devices, sRGB content goes to all the rest.

And it doesn’t waste space on the devices that it’s not relevant for.

On the Mac, there’s actually nothing new here, NSImage has always been able to select the best representation from available representations in the asset catalog.

It’s going to continue to do so based on the characteristics of your target display.

So if you are on a wide gamut iMac for example and you have P3 content available in your asset catalog, it’s going to go ahead and select that when rendering its content.

Also, just like the behavior with 1X and 2X assets on the Mac, NSImage and NSImage View and all the related app kit classes make sure to refresh the content automatically when the display characteristics change.

Such as when your window moves between the internal and the external display, or when the color characteristics or backing scale factor change.

So that’s great.

Okay, but how is this data actually stored and what impact can that have on your application?

Well the good news is that at build time we tried really hard to optimize the pixel formats and storage characteristics of all your image content in the asset catalog.

And do so in as efficient a way as possible, to the best of our ability anyways.

We do now use 16 bit per component storage for wide color content.

I mentioned this before.

This really allows you to have end-to-end 16 bit color precision in your application resources.

And, we also have compression.

Now this is quite handy, because 16 bits versus 8 bits means more data.

More information is inevitably going to lead to a larger footprint unless you apply compression.

Now, we’ve always had compression, we’ve always had lossless compression.

Akin to what happens with say PNG files.

But what’s new this year is some lossy compression options to help counteract app footprint size.

The first one of these is what we call basic compression.

Now this is a compression system that’s almost exactly like JPEG, except with the added bonus that it actually handles transparency and alpha as well.

It has similar visual characteristics and performance characteristics.

And the great news is it performs really well on all devices.

Just like JPEG does.

So it’s something you can count on to give a very slight reduction in visual quality in exchange for really excellent storage characteristics.

Now, new this year as well, we have this exciting option called GPU compression using ASTC.

So this stands for Adaptive Scalable Texture Compression, which is a GPU-compressed texture pixel format that many modern GPUs, on Apple devices as well as other systems, support.

And we have brought this option to you in asset catalogs in two different forms.

One is GPU best quality, which is a constant bit rate, 4 bits per pixel, ASTC compression mode.

Which is a great choice, roughly analogous in terms of visual performance and visual fidelity to high quality JPEG.

And then we also have GPU smallest size where, if you really want to greatly optimize your storage and memory footprint, you choose this option; that’s a 1 bit per pixel constant bit rate codec which has excellent storage characteristics.
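To see why these options matter, here is a back-of-the-envelope footprint comparison for a hypothetical 1024 by 1024 image, using the per-pixel rates just described (raw payload estimates only, ignoring headers and mipmaps):

```swift
// Raw payload size for width x height pixels at a given bit rate.
func payloadBytes(width: Int, height: Int, bitsPerPixel: Int) -> Int {
    return width * height * bitsPerPixel / 8
}

let w = 1024, h = 1024
// Uncompressed 8-bit RGBA for comparison: 4 MB.
let uncompressed = payloadBytes(width: w, height: h, bitsPerPixel: 32)
// GPU best quality, 4 bits per pixel: 512 KB.
let gpuBestQuality = payloadBytes(width: w, height: h, bitsPerPixel: 4)
// GPU smallest size, 1 bit per pixel: 128 KB.
let gpuSmallestSize = payloadBytes(width: w, height: h, bitsPerPixel: 1)
```

So the smallest-size option is roughly a 32x reduction over uncompressed RGBA for this image, at the cost of visual fidelity.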

Of course, since not every GPU on every device that is supported today can use the ASTC format, we take an extra step for you and automatically generate a software fallback for devices that lack that capability.

What’s great about this is it means you don’t have to worry about having incompatibilities of your content for all the supported devices, we’re going to automatically generate that fallback, and use it and thin it, and route it appropriately to those devices that cannot support ASTC.

So you can use GPU compression without any real risk of compatibility breakage.

A brief note about how GPU compression interacts with wide color assets specifically.

So, we use the ASTC Low Dynamic Range, or LDR compression mode, which really means that wide content actually needs to be reduced to 8 bits before compression, because LDR is an 8 bits per sample compression format.

The good news is we perform this work for you.

We automatically perform a high quality dither down to 8 bits at build time when we’re processing your images.

But we preserve the wide gamut colors that may be contained within your original 16 bit source content by encoding and compressing in the Display P3 color space, thus preserving the ability to represent all of the colors within the Display P3 gamut.

Okay, so that’s the deployment.

We’ve talked a bit about deployment, but this talk is supposed to be about colors.

Well what about colors?

Specifically colors in UI.

An important observation is that most of the pixels you see on screen drawn by most applications, don’t come from images, even though they tend to get top billing in this kind of talk.

Most pixels on screen are actually solid colors drawn by your code in your application.

And wide gamut colors it turns out, can present new challenges in just working at that simple level.

So let’s talk about that.

In particular, the first challenge I’d like to talk about is actually talking about colors, because this is an under-appreciated problem.

Usually when designers and engineers communicate colors, whether in written, verbal, or visual form, the color is communicated with an assumed sRGB color space.

That means you’re probably used to seeing colors written down as something like this, RGB 128, 45, 56.

You know very simple.

They don’t tell you what color space they’re in.

And it’s just assumed everybody knows what color that is, because everybody’s using sRGB, aren’t they?

Well, not anymore.

This is no longer sufficient for working with wide gamut colors.

So what do you do?

Well the most important step you can take is be specific about what color space you’re working in when you communicate, or write down, or, you know, transmit that color.

Use Display P3, instead of sRGB when you’re working on wide gamut designs.

And indicate that as such.

And if you need more precision than 0 to 255 8-bit representation can give you then go ahead and use floating point.

So as an example, next time you’re sending colors in email, maybe you’ll refer to them as something like this, with an annotation, P3 255, 128, 191, etcetera.

Okay, that’s how you communicate a color, but where did that color come from in the first place?

You probably picked it, the designer probably picked it somewhere.

How do they pick it?

Using a color panel.

This is the standard color panel that gets shipped with the Mac, it’s part of AppKit, known as NSColorPanel.

This of course is a very familiar UI, but it also suffers from some of the limitations we just talked about in communicating colors.

Typically, you pick red, green and blue values, 0 to 255 numbers.

The color panel has always supported selecting in different color spaces, but that hasn’t always been a very obvious or easy user experience.

So I’m excited to say that we’ve actually made some enhancements in the color panel, in macOS Sierra to make working with wide color a little easier.

The first thing we’ve done is we’ve put the most common and important working spaces right up there in the action menu for the number picker: sRGB, and of course Display P3.

The next thing we’ve done is we’ve actually allowed an option to change the numerical representation of the colors from integer to floating point.

So that you can standardize on floating point colors if that works for your workflow.

Another exciting thing we’ve done is in the hue wheel, or wheel picker, we’ve actually changed its implementation so that it can actually render the full gamut of P3 colors when placed on an appropriate display.

And we’ve added a new context menu that allows you to either use this automatic behavior, where it just basically switches its capability between P3 and sRGB as you cross displays, or pin it to a specific color space, if that’s more what you’d like to see.

Okay, now we’ve picked our colors, we know how to communicate them.

Well, to actually make anything happen we have to code them don’t we?

How do we construct wide gamut colors in code?

We’ve introduced two new convenience constructors in AppKit and UIKit to take Display P3 color numbers directly.

These are NSColor(displayP3Red:green:blue:alpha:) and UIColor(displayP3Red:green:blue:alpha:).

Great way to work with P3 colors from your designer.
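As a minimal sketch of the constructor just mentioned (assuming UIKit; the AppKit NSColor form is analogous), here is how you might turn a designer’s "P3 255, 128, 191" annotation into a color in code:

```swift
import UIKit

// Component values are 0.0–1.0, so divide the designer's 0–255
// Display P3 values by 255.
let accent = UIColor(displayP3Red: 255.0 / 255.0,
                     green: 128.0 / 255.0,
                     blue: 191.0 / 255.0,
                     alpha: 1.0)
```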

But of course your code may actually be working with extended range sRGB colors that didn’t come from a designer, but maybe came from a different subsystem, or an API that vended extended range sRGB colors.

How do you work with those?

Well, the good news is we’ve extended the existing standard red, green, blue, alpha constructors for NSColor and UIColor to not clamp their input components, allowing values greater than one and less than zero.

So now you can construct extended range sRGB colors very easily.
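For example, the most saturated Display P3 red lands outside the 0–1 range when expressed in extended range sRGB coordinates. A sketch, assuming UIKit (the component values below are approximate, for illustration only):

```swift
import UIKit

// The standard sRGB constructor no longer clamps, so out-of-range
// components express wide gamut colors in extended range sRGB.
let wideRed = UIColor(red: 1.093, green: -0.227, blue: -0.150, alpha: 1.0)
```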

Okay, now storing colors.

What if you actually have to like put colors into your document data, or archive them in some form?

Well, special care really needs to be taken when doing that, because like many other parts of the ecosystem, there has traditionally been a lot of assuming color spaces, assuming everybody’s talking about the same thing when you pass red, green and blue numbers.

All you should ever need is three numbers, right?

That’s all you need to represent a color.

What could be wrong with that?

Well, it turns out there is something wrong with that.

You may not be able to disambiguate between an sRGB and a P3 color, and you may make the wrong choice and end up with the wrong color in your document.

So I’d like you to consider encoding a compatible sRGB color alongside the new wide gamut color, because since this is document data, you have forward and backward compatibility to consider.

This allows older versions of your application to continue pulling that sRGB data and treating it as sRGB, which is the assumption they’re making.

Whereas the newer software can know about this new wide gamut color that you’ve stored alongside it.

How do you create that compatible sRGB color?

On iOS you can use the CGColor.convert API.

And on macOS you can use the venerable NSColor.usingColorSpace API to convert those colors to the sRGB color space.
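A sketch of that conversion on iOS (on macOS you would use NSColor’s usingColorSpace(.sRGB) instead), assuming you archive both representations in your own document format:

```swift
import UIKit

// Derive a compatible sRGB fallback from a wide gamut color
// before storing it in document data.
let wide = UIColor(displayP3Red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)
if let srgbSpace = CGColorSpace(name: CGColorSpace.sRGB),
   let fallback = wide.cgColor.converted(to: srgbSpace,
                                         intent: .defaultIntent,
                                         options: nil) {
    // Archive fallback's components for older readers, and the
    // original Display P3 values alongside for newer ones.
    print(fallback.components ?? [])
}
```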

We actually ran into this in the system itself in macOS.

You may be familiar with TextEdit, the text editor app on the Mac.

And its document format is RTF, or Rich Text Format, which has been around a long time and is supported by the Cocoa Text System as its native document storage format.

And it turns out that when you actually applied a color to, say, a range of text, the way it would actually store that in the RTF document was in fact as just simple red, green, blue, 0 to 255 values.

Which is kind of a problem, I think you’ll understand now, because we have no idea which color space those red, green, blue values are in, and it doesn’t allow you to express the full range.

So, we had to take action.

What we ended up doing is revving the RTF spec, or at least how we write and read RTF on our platforms.

We include an expanded color table that annotates each color tuple, which has now been widened to 16-bit integers, with individual color space information, allowing you to specify colors in multiple color spaces.

Okay. That’s great.

But this is California, we’re supposed to be surfing aren’t we?

Well, what does that have to do with colors?

Well, there’s colors on the web.

The good news with wide gamut colors and the web is that as long as your image content is tagged appropriately and has appropriate color profile information, all of the content is color matched when it’s rendered on webpages.

So as long as you do that you’re good to go.

Also, there are now media queries available to resolve assets between P3 and sRGB capable devices, which is really handy.

And lastly, there’s a WebKit proposal out right now for specifying CSS colors in color spaces other than the assumed sRGB, which has been the case since CSS’s inception.

And with that I would like to hand it over to Steve Holt, who will tell you a little bit more about rendering with wide color in your apps.

Thank you.

[ Applause ]

Thank you Patrick.

So, in your applications you often have your assets that you get from the asset catalog, but also you might get some content generated by your users or pulled down from the internet from sources you don’t control.

And maybe you want to generate some additional assets as part of that.

How do you deal with this with wide color?

Well, when you’re drawing with wide color, I’m going to take a simplified example here.

Now, this is a box where one-half of the box is rendered with the most saturated Display P3 red we can have.

And the other half of the box is rendered with the most saturated sRGB color that we can have.

Unfortunately, because these videos are recorded, and you’re watching it live on a projector system that doesn’t have the same color fidelity as our new displays have, we have to cheat.

So, we’ve actually de-saturated both colors here.

So what you’re seeing isn’t exactly what this code that I’m going to show you will output, but render it on a device and it will be there.

So, I’m a Frameworks Engineer on the Cocoa, or on the UIKit Team, so of course I’m going to start with Cocoa on the desktop.

When you’re on the desktop, it already has some great color management support for color profiles in your images and in your drawing.

If you have to do manual drawing in your code, and it needs to be done off screen, the recommended way to do this is using the NSImage with drawinghandler API.

It’s called with the current context.

And you can use this anywhere you use any other NSImage, it will just work there.

So, let’s look at some code.

So, pretty simple, just set up a size and we initialize our NSImage with the size we want.

We don’t want this to be flipped.

And we configure our drawing handler.

We get the Rect from the drawRect that gets passed in.

We divide it up into halves, allocate our displayP3Red and draw it.

Then allocate our sRGB red and then draw that.

And then return true, because we drew successfully.

And we’re done.
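The steps just described can be sketched like this (a minimal assumption-laden example, not the exact slide code; the half-P3, half-sRGB red box from earlier):

```swift
import AppKit

// Draw half the box in Display P3 red, half in sRGB red,
// using the drawing-handler NSImage initializer.
let size = NSSize(width: 200, height: 100)
let image = NSImage(size: size, flipped: false) { rect in
    var half = rect
    half.size.width /= 2
    NSColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1).setFill()
    NSBezierPath(rect: half).fill()
    half.origin.x += half.size.width
    NSColor(srgbRed: 1, green: 0, blue: 0, alpha: 1).setFill()
    NSBezierPath(rect: half).fill()
    return true  // we drew successfully
}
```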

There you go.

It all just works.

If you’re using this in your application today, you’re already doing the right thing.

So, that’s pretty much the desktop.

What about iOS?

Should be just as simple right?

So you must all be familiar with UIGraphicsBeginImageContext.

It’s a nice little API.

And, let’s see the code again.

We get the size, we make the context with the size.

We divide our Rects, draw our P3 color, draw our sRGB color.

We get our image and then end the context.

And this is what we get out.

That’s not correct.

Why didn’t this work?

Well, if we look at the documentation we have this fun little bit.

We’ve documented that the UIGraphicsBeginImageContext API in UIKit will only ever return you a 32-bit integer context.

That means you only get 8 bits for each of the red, green, blue, and alpha channels.

This has a pretty big impact on what we can draw with this.

Since we can’t create contexts that are larger than 8 bits per color channel, it can’t represent any colors in the extended sRGB color gamut.

And because the existing interface is written in C, well, we can’t expand the options.

And we didn’t want to make a UIGraphicsBeginImageContextWithMoreOptions.

So, we have some new API for you in iOS 10.

It’s called UIGraphicsImageRenderer.

And the way this works is really straightforward.

So I’m just going to jump into some code with this.

Here we are.

First we instantiate our image renderer object.

You can reuse this.

And it retains the same properties across every invocation of the drawing handler and every image you get out of it.

And you just call renderer image.

You supply your block, and you just draw as you would otherwise.

The bounds you can get off of the format which describes the exact properties that this particular renderer has.

And then you just divide the bounds.

You get and render the P3 red square.

And then you configure and render the sRGB red square and just like we had before, in AppKit, we have a correctly rendered extended range half P3, half sRGB red square.

So just a few other notes about this new API.

It is fully color managed by default.

And it also supports the extended range sRGB color space by default.

And it’s pretty smart about this.

If you have a device that has a P3 display, like the new iPad Pro 9.7 inch, we’ll just turn it on by default.

You don’t need to do any additional work.

But if you’re on a device that does not have the extended color display, like every other iPad and iPhone, then you’re going to get the more standard 8 bits per channel, sRGB context to draw into.

So we don’t waste the additional memory for having the 16 bit float context.

The other thing this does for you is it manages the CGContext lifetime.

You don’t need to worry about ending the context or any other management yourself.

And for any legacy code that you have in your application, it works with UIGraphicsGetCurrentContext.

So if you call into some function like a drawRect, it will do the right thing.

So, that’s rendering off the screen.

What about to the screen?

Okay, we’ll start with UIKit again.

On UIView we have the new, well, newly named in Swift 3, draw method.

And we do the right thing for you here.

When you call draw in your UIView subclasses, if you are on an iPad Pro 9.7-inch, we will call your draw method in the correct extended range sRGB color space with the floating point pixel format.

And if you’re not on one of the new devices, then, well you get what you get now, 8 bits per channel sRGB.

And of course if you have images, we have UIImageView, which has been color managed since iOS 9.3 and continues to be color managed today.

If you need to know how you’re rendering, you can look at a new trait on UITraitCollection, in your views and view controllers, called displayGamut.

It’s a UIDisplayGamut enum, and its value is P3 if you’re on a P3-class display, or sRGB if you’re on an sRGB-class display.

And this can be very useful in the case where you need to match a particular UI color with an asset pulled from your asset catalog.

So you can properly match colors to your assets.
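For instance, you might resolve an asset variant against that trait like this (a sketch; "wideLogo" and "standardLogo" are hypothetical asset names):

```swift
import UIKit

// Pick the asset variant that matches the current display's gamut.
func logoImage(for traits: UITraitCollection) -> UIImage? {
    switch traits.displayGamut {
    case .P3:
        return UIImage(named: "wideLogo")
    default:
        return UIImage(named: "standardLogo")
    }
}
```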

Now, what if you know that a view is never going to need the extended range sRGB context?

Well, there’s a new property on CALayer, and that is contentsFormat.

And this controls exactly how deep the backing context that Core Animation creates for you when it renders that view will be.

By default, on the iPad Pro 9.7-inch, it is to use the extended range context.

And on all other devices the default is to use the traditional sRGB context.

And you control it with any one of these various format literals.
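As a sketch of opting a layer out of the extended range backing store when its content never needs wide color:

```swift
import UIKit

// Cap this layer's backing store at 8 bits per channel
// (kCAContentsFormatRGBA8Uint) to avoid the 16-bit float cost.
let view = UIView()
view.layer.contentsFormat = .RGBA8Uint
```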

So on Cocoa, back on the desktop, NSView, same thing as UIView.

It will draw with the current context that NSWindow has.

And you can look at any of the traditional properties there with the Window Backing Store, the screen target profile, etcetera.

If the window changes displays, then you get the viewDidChangeBackingProperties callback on your view.

And you can also listen for the NSWindowDidChangeBackingPropertiesNotification, which is emitted from your view’s window.

These are the same APIs you’ve been working with for a very long time now.

And just as a refresher.

The backing properties on the NSWindow include the display scale, color space and the output display gamut.

And just like on iOS, you can also control how deep of a context you get in AppKit, with the depthLimit property on NSWindow, which you can set to any one of the NSWindowDepth values.

So if you are on a wide gamut display but don’t need that extra precision you can set this to be 24 bit RGB, and it will not create the extended context for you.
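In code, that cap might look like this (a minimal sketch using the Swift NSWindow.Depth spelling):

```swift
import AppKit

// Limit the window's backing store to 24-bit RGB so AppKit
// won't create an extended range context for it.
let window = NSWindow()
window.depthLimit = .twentyFourBitRGB
```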

So, what did we learn today?

Well, you’ve seen our vision for wide color and how to bring that to the next generation of displays and your users.

We’ve talked a little about color gamuts and color management, and how to work with that wide gamut content that you want to bring to your users.

We’ve reviewed how to use colors in your application.

And how to take your drawing code to the next level of color by making sure that it draws properly in these wide gamut scenarios.

Now for more information you can of course check out the slides on the session’s website.

And there are a large number of related sessions.

This is page 1.

So please check them out.

If they’ve already passed, I’m not going to enumerate them, there are a lot.

See the videos through the WWDC app or online at the WWDC website.

Thank you.
