[ Music ]
[ Applause ]
Hello and welcome.
My name is Kushal Dalmia, and my colleague, Terry Long, and I are going to be presenting Optimizing I/O for Performance and Battery Life.
In this talk, we're going to take a look at what an I/O is, how it affects your app, and how you can improve your app's performance by improving its I/O performance.
So let's begin.
As we all know, devices are getting bigger and better every year.
Screen resolutions have gone up by as much as 16 times in the past decade.
Similar improvements in camera technologies allow us to capture 4K HD videos and amazing high-quality images from our mobile devices.
All these improvements have led to richer media being produced and consumed every day.
Just to put it in perspective, let's take a look at the trend of the iPhone wallpaper size.
If you look at the size of the iPhone wallpaper across device generations, you'll notice that the growth has been exponential.
The size of the wallpaper on an iPhone 6s Plus is as much as 14 times its counterpart on the iPhone 3G.
And there's a similar trend for all of the data on our phones as well.
We build and use complex apps for gaming, messaging, and social networks.
We work with and store richer documents like PDFs.
And we all share and capture high-quality audio and video files.
Now to manage this data explosion, apps need to be really efficient in their system resource usage, and the main system resources are CPU, memory, and I/O.
For CPU and memory, I'll refer you to last year's WWDC talk, Performance on iOS and watchOS, and today we're going to talk about I/O.
I/O, or input/output, refers to operations that interact with local file storage or network-based servers.
Operations that interact with the file system and deal with reading or writing files are generally considered an I/O.
Talking to a web server is a good example of network-based I/O.
Now one of the reasons I/Os are so interesting is that there is a huge variation in I/O technologies and their performance characteristics.
Consider the latency to do a one-megabyte write to some of the most common I/O media, such as an SSD, a hard disk, and a common Wi-Fi network.
As you'll notice here, the same operation takes anywhere from a couple of milliseconds to hundreds of milliseconds based on the I/O medium you're interacting with.
And the reason I/O is so important is that the I/O performance of your application has a direct impact on user experience.
Latency variations in your app's performance can show as responsiveness issues.
Since I/O is a shared resource in the system, your app's I/O performance could affect overall system performance.
And as we'll see shortly, I/O significantly impacts the battery life of the device.
Now to help you reason about the I/O usage of your app, we've come up with our own I/O philosophy, and the I/O philosophy has four main pillars.
Reduce the amount of I/O your application does, use the right thread to do these I/Os, adopt appropriate and efficient APIs to do these I/Os, and, lastly, test and measure your application for I/O performance.
As we move through the rest of the talk, we'll look at each one of them in further detail.
Now the best way to improve the I/O efficiency of your application is to reduce the amount of I/O it does.
Every I/O operation interacts with multiple hardware components on your device.
Here's a simple block diagram of a modern device with some of its components and their impact on battery life.
When your app is using I/O, it runs on code on the CPU, accesses memory, and ultimately fetches data to or from the disk.
If the network is involved, the network-based radios are interacted with as well.
The combined power cost of all of these components makes I/O a heavy operation in terms of battery usage.
Since I/O has such an adverse effect on the battery life of the device, let's take a look at a couple of best practices you can use to reduce the amount of I/O in your application.
And the first one is caching.
The main idea here is to create an in-memory copy of your data rather than going out to the disk for every operation.
To decide if your data should be cached in memory in your application, you should look at the access patterns of your data.
Data which is frequently written to or updated might be a good candidate to cache in your application.
Also, data which, once read from the disk, needs an expensive processing step, for example, decompressing an image file, might be another good candidate for caching.
Having said that, you should be aware of the tradeoffs between memory and I/O.
Just like I/O, memory is a shared and limited resource in the system, and you should be careful in your usage of it.
If you do decide to create caches in your application, we would recommend using the NSCache API, since it handles memory pressure conditions appropriately for you.
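As a rough illustration of this idea, here's a minimal sketch of an in-memory image cache built on NSCache. The `ImageCache` type and its method are hypothetical names for illustration, not part of the actual ImageBox source:

```swift
import UIKit

// A minimal sketch of an image cache using NSCache, which automatically
// evicts entries under memory pressure. Names here are illustrative.
final class ImageCache {
    private let cache = NSCache<NSURL, UIImage>()

    func image(for url: URL) -> UIImage? {
        // Return the in-memory copy if we have one, avoiding both the
        // disk read and the expensive decompression step.
        if let cached = cache.object(forKey: url as NSURL) {
            return cached
        }
        // Otherwise fall back to the disk, then populate the cache.
        guard let data = try? Data(contentsOf: url),
              let image = UIImage(data: data) else { return nil }
        cache.setObject(image, forKey: url as NSURL)
        return image
    }
}
```

Because NSCache purges entries under memory pressure on your behalf, it avoids the pitfalls of a hand-rolled dictionary cache that never shrinks.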
The next best practice is coalescing your I/Os, and the main idea here is to defer your I/Os to a later, more suitable time in the system.
Due to the way I/O technologies work, fewer, larger I/Os are always more efficient for the system.
One of the ways to do that is to use app state change notifications, for example, applicationDidEnterBackground, to schedule your I/Os.
On macOS, you can use the centralized task scheduling APIs to schedule your maintenance and backup tasks, and the system will figure out an optimal time to run these for you.
To learn more about these APIs, we would recommend looking at the WWDC 2014 talk Writing Energy Efficient Code.
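One of those centralized task scheduling APIs on macOS is NSBackgroundActivityScheduler. Here's a rough sketch of how an app like ImageBox might use it; the identifier, interval, and maintenance task are placeholder assumptions for illustration:

```swift
import Foundation

// A sketch of macOS centralized task scheduling with
// NSBackgroundActivityScheduler. The work below is a placeholder.
func pruneOldThumbnails() {
    // Imagine deferrable maintenance work here, e.g. deleting stale caches.
}

let activity = NSBackgroundActivityScheduler(identifier: "com.example.ImageBox.maintenance")
activity.repeats = true
activity.interval = 24 * 60 * 60   // roughly once a day
activity.tolerance = 60 * 60       // give the system an hour of leeway
activity.schedule { completion in
    pruneOldThumbnails()
    completion(.finished)
}
```

The tolerance gives the system latitude to batch this work at an energy-efficient moment rather than waking up at an exact time.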
Now that we've taken a look at a couple of best practices to reduce the amount of I/O in your application, I'd like to introduce our sample application which Terry and I have been working on, and we'll use this application for the rest of the talk to demonstrate the practical implications of some of these best practices.
And that app is called ImageBox.
ImageBox is our amazing app on iOS and macOS that lets you add and browse images.
For each image, it shows you a thumbnail, shows you associated badges such as favorites, or whether it has notes associated with the particular image.
When you tap on a particular image, it takes you to a detailed view, which lets you mark the image as a favorite or unfavorite it, or add a note to it.
Now that we've created this app, we want to know if our app is I/O efficient and does well in terms of I/O performance.
So I'm going to talk about the tool that you can use to decide this for your own application, and the tool is the Xcode debug gauge.
So let's see how that works.
In order to use that tool, simply run your project from the Xcode UI.
This launches the project or the application on the device or the simulator.
Click on the Xcode debug navigator.
Now this shows you live data from your application about all the system resources your application is using.
You have CPU, memory, energy, network, and disk.
Since we're interested in the I/O activity or I/O performance of our application, let's go ahead and select disk.
Once you do that, you notice that it shows you live data about the reads and writes being done by your application, and it looks like our application is doing a lot of writes every few seconds even though there is no user activity in it.
Now to investigate this further, we want to dig into instruments, and use instruments to find out what's happening.
So let's go ahead and click on profile in instruments, and hit the restart button.
Once you do that, instruments provides you a set of templates that you can choose from for analyzing your application.
And since we're interested in the I/O activity of our application, we go ahead and select System Usage, and next hit choose.
Doing that opens a new instruments template that's ready to record the I/O activity for your application.
So let's go ahead and start recording.
As you now notice in the detail section, this template shows you all the system calls made by our application which do I/O on its behalf.
It shows other useful information such as the actual and requested number of bytes for those reads and writes and the file path associated with them.
I'll go ahead and stop this recording now.
Now in order to find out the large writes that we were seeing in the Xcode debug gauge, we sort this data by the actual number of bytes that are being read and written and identify the large write that's near the top.
Once we have that, we can actually go into the extended detail viewer and see the exact backtrace of the piece of code doing these I/O's in our application.
It looks like it's our app delegate method.
Double clicking on that takes you to a source inspector which shows you the exact block of code doing these I/O's.
If you click on the Xcode icon in the source inspector, it takes you back to Xcode project and highlights the piece of code doing these I/O's for you.
So let's take a look at this piece of code in further detail, and the code in question is our implementation of the application(_:didFinishLaunchingWithOptions:) app delegate method.
As part of its implementation, we create a new timer DispatchSource, schedule it to fire every five seconds, and as part of the event handler for that timer, we write out our entire data store.
Now a lot of us write code like this because we want to make sure that the application data is being saved out consistently and regularly.
However, there's a more I/O efficient way of doing this, and to fix this code, the first thing that we'll do is eliminate the repeating nature of the timer.
So let's get rid of that.
Instead, we create a new method called dataStoreDidChange, which is called from various places in the application whenever there is a change to the data store.
As part of this implementation, we push out the timer dispatch source by 15 seconds into the future.
This way, we collect all updates to our application's data store, push them out into the future, and coalesce them into a single write.
Once the timer eventually expires, it has basically collected a bunch of updates that were frequently done, and we'll write them out as a single I/O operation.
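Condensed, that coalescing pattern looks roughly like this in Swift. The `CoalescedSaver` type and the `save` closure are illustrative names under my own assumptions, not ImageBox's actual code:

```swift
import Foundation

// A sketch of write coalescing: instead of a repeating 5-second timer,
// each data store change pushes a one-shot timer 15 seconds into the
// future, so bursts of updates collapse into a single disk write.
final class CoalescedSaver {
    private let timer: DispatchSourceTimer

    init(queue: DispatchQueue, save: @escaping () -> Void) {
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer.setEventHandler(handler: save)
        // Park the timer far in the future until a change arrives.
        timer.schedule(deadline: .distantFuture)
        timer.activate()
    }

    // Call whenever the data store changes; rescheduling resets the
    // deadline, so the write happens once things quiet down.
    func dataStoreDidChange() {
        timer.schedule(deadline: .now() + 15)
    }
}
```

Each call to `schedule(deadline:)` replaces the previous deadline, which is what gives us the deferral behavior.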
So let's see what these code changes do for our application.
We run the application again using the Xcode UI's run button.
Go to the debug navigator and select the disk gauge to find the I/O activity.
As you'll notice here, the application is not doing those writes anymore.
Since it's completely idle and the user is not interacting with it, this is exactly what we expect.
We've effectively coalesced the I/Os that our application does, and improved its I/O efficiency.
Now that we've taken a look at a couple of best practices you can use to reduce the amount of I/O, let's take a look at what threads you should be using to do these I/Os, and for that, I'd like to invite Terry on stage.
[ Applause ]
So we just saw some great ways that you can reduce I/O in your applications to avoid negatively impacting battery life.
Now let's move on to the second pillar of our I/O philosophy.
I'll explain some ways that you can use threads and queues effectively in your application for great I/O performance and efficiency.
Every application on the system starts with a single thread called the main thread.
This thread is special, and it has a few primary purposes.
The first purpose of the main thread is to handle input.
So if I tap on a button in my application, the main thread is responsible for handling that input and responding to it.
Additionally, the main thread is responsible for updating your interface.
This is for doing things like drawing your views, doing layout, or animating.
When your main thread is idle, it's ready and available to respond to input or update your user interface.
But if you're doing other things on your main thread, such as executing lengthy tasks like expensive image processing, that work keeps your main thread busy. If it isn't idle, it won't be able to respond to input or update your UI.
Additionally, what we'll focus on today, you should avoid doing I/O on your main thread.
As we've already seen, I/O is an expensive resource on the system that needs to be managed properly.
If you're doing I/O on your main thread, someone using your application could notice some problems.
The first example of this is on macOS.
Someone may see the spinning cursor.
The spinning cursor indicates that your main thread is busy and that you won't be able to interact with the application.
Additionally, on iOS, a busy main thread may appear as a frozen or just unresponsive application.
And, lastly, doing I/O on your main thread can cause issues for animations.
For example, if I do a large scroll in a table view in my application and then do I/O on the main thread to load in more data, the time that my application spends doing that I/O is time that it doesn't have to continue animating, which can cause issues like stutters.
I'd like to mention, again, the talk that Kushal pointed out earlier, Performance on iOS and watchOS.
This talk also has some great information about using your main thread effectively.
Now I'd like to take a look at our ImageBox sample application, this time running on macOS.
I've been noticing an issue when trying to add images to the main collection view.
So let's take a look.
First from Xcode, I'll click on the run button.
Xcode launches my application, and then I'll click the add button on the right side of the toolbar.
Then I'll select an image from the open panel and click open.
As you can see, the open panel doesn't disappear, and we see the spinning cursor.
Eventually, the open panel disappears, and the image that we selected shows up in the main collection view.
So what might be going on here?
Well, as we already saw, the spinning cursor indicates that your main thread is busy.
So something must be running on the main thread that's preventing it from being idle.
So we won't be able to interact with the application.
We need to figure out what's going on, and to do that, we can use instruments.
Back in Xcode, we can choose profile from the product menu.
Xcode recompiles our application for profiling and then launches instruments.
This time, I'll choose the time profiler instruments template.
Time profiler is great for seeing how much time different parts of your code are spending executing.
So we can use this to figure out why our main thread is busy.
Now I'll click choose, and instruments opens a new, blank time profiler document.
By default, instruments time profiler only shows time spent while the CPU is actively executing code.
Other things like I/O aren't actively executing on the CPU.
The CPU's just waiting on the I/O to complete.
So to also see those types of operations in our instruments trace, first we need to click on the record waiting threads option under the record settings.
Now instruments will also show us time spent while we're doing things like waiting on I/O.
So let's get started and click the record button in instruments.
Instruments launches our application, and then I'll take the same actions that I took before to reproduce the problem.
First, clicking the add button, selecting an image, and hitting open.
Again, we see the issue.
So now we can hit stop in instruments and see what's going on.
Before I continue, I'd like to reduce some of the noise in this output by focusing just on the code that I've written and not any other system libraries.
And to do that, first, I can click on the display options on the right side of instruments.
Then click on hide system libraries.
Now instruments will only show me code that I've written and not any other system frameworks that I might be calling.
So now let's take a look at the main detail view of instruments.
Instruments shows all the different threads in my application and the time that each of them is spending executing.
In this case, we know that we're interested in the main thread.
So I can expand the main thread section and find the heaviest stack.
In this case, I can see that we have an open panel callback in our application, which is calling an add method on our data store.
That add method is then saving our entire data store out to disk.
And Instruments shows us that saving is taking almost seven seconds, and that's really bad.
I happen to know that this save method is writing out a pretty big Plist, and that could be contributing to the problem.
Kushal will mention some ways later in the talk on how we can optimize our data store operations so that this is really fast, but for now, I'd like to focus on how we can fix this problem so that no matter how long that operation takes, our application is still extremely responsive.
To do that, let's take a look at the code.
Here I have the open panel callback.
It's waiting for a response.
Once it receives that response, it validates that it has a URL that points to a valid image.
Then it creates a new item for our collection view from the image and tries to add it to our data store.
If that was successful, it tells the main collection view to reload its data so that we can see the image that we just selected.
As we saw earlier, and what instruments verified for us, calling that add method is expensive because it's saving out all that data to disk.
So let's see how we can fix this.
To recap, our application has a main thread.
The main thread is running the open panel callback.
That callback then calls the add method on our data store, and this is where we see the spinning cursor.
Once that work is done, we finally update our main collection view, and this is obviously not what we want.
This entire time, the main thread is busy, and we can't interact with our application, and we can't update any UI.
So one way that we can fix this is by using Grand Central Dispatch, or GCD.
With GCD, we can create a new dispatch queue.
Dispatch queues are a way to run code concurrently to the main thread.
We can use this to move our expensive I/O related work onto this queue, leaving the main thread idle.
To do that, we can call the async method on the queue and push that expensive work onto our queue rather than the main thread.
Finally, since UI related work has to happen back on the main thread, we can asynchronously dispatch back there to finally update our collection view.
And now this is exactly what we want.
Now the expensive I/O work is happening on a separate queue, which leaves the main thread idle, which means we'll be able to interact with the application and continue using it.
Let's see what this looks like if we implement it in code.
Here I have the same open panel callback from before.
To get started, first I can create a new GCD dispatch queue and provide a descriptive label.
In this case, I've created a queue that I can reuse for all of my data store operations.
Next, we can move the expensive work of adding that image onto this queue by providing that code as a block to the async method on the dispatch queue.
Finally, to update our UI, we can call DispatchQueue.main.async and pass in a block that has all of our UI-related work.
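Put together, the pattern looks something like this. `DataStore` and the `reloadUI` callback are stand-ins I've invented for ImageBox's real data store and collection view:

```swift
import Foundation

// A condensed sketch of moving expensive I/O off the main thread with
// GCD, then hopping back to the main queue for UI work.
final class DataStore {
    func add(imageAt url: URL) -> Bool {
        // Imagine an expensive save-to-disk here.
        return true
    }
}

let dataStore = DataStore()
let dataStoreQueue = DispatchQueue(label: "com.example.ImageBox.dataStore")

func add(imageAt url: URL, reloadUI: @escaping () -> Void) {
    dataStoreQueue.async {
        // The expensive I/O happens off the main thread...
        let success = dataStore.add(imageAt: url)
        DispatchQueue.main.async {
            // ...and the UI update hops back to the main queue.
            if success { reloadUI() }
        }
    }
}
```

Using one serial queue for all data store operations also keeps the writes ordered, since blocks on a serial queue run one at a time.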
Now that we've done that, let's see what this looks like if we rebuild and run our application in Xcode.
So first I'll click the run button, wait for the application to launch, and then try adding an image again.
Click the add button, select an image from the open panel, and then click open.
As you can see, the open panel disappears immediately, and we can continue interacting with the application and adding more and more images.
You'll also notice that I've added some placeholder images in the main view.
This is just to give an indication that we're currently processing that data and saving it out to disk.
Once all that data is done being added and saved, all the images show up in the main collection view, and this entire time, our main thread was idle, which means our application was extremely responsive, and that's exactly what we want.
So now that we've moved that work from the main thread off to a dispatch queue, we should consider telling the system the intent of that work so it can manage resources on our behalf, and to do that, we can use something called quality of service.
Quality of service is a way to tell the system the intent of the work that you're performing so that it can properly manage resources like CPU or I/O.
It manages these resources among the different processes running on the system and the different threads within your own application.
When thinking about quality of service, keep in mind three attributes of the work that you're performing.
The visibility, importance, and expectation.
Ask yourself three questions.
Is the work that you're performing visible to someone using your application?
Secondly, what is the importance of that work?
Is that work required to complete before someone can continue using your application?
And, lastly, how long is that work expected to take?
Is this something that happens immediately or something that you might assume takes a longer amount of time?
Before I continue, I'd like to mention a talk from last year's WWDC called Building Responsive and Efficient Apps with GCD.
This talk goes into a lot of detail about GCD and how to use quality of service, and I highly recommend that you go watch it.
So once we've thought about these three attributes of our work, we're ready to choose from one of the four quality of service classes.
The first quality of service class is user interactive.
User interactive is designated for your main thread.
This is for doing things like responding to input and animating.
All other work that happens asynchronously from the main thread should be using one of the other three quality of service classes, and the first of those is user initiated.
User initiated work is visible to someone using your application, and they're expecting immediate results from that work.
They probably also need that work to complete before they can continue interacting with your application.
A good example of that is if I click on a button to switch to a new view, I may need to load some resources on a different queue in order to display that view, and that work should be happening at user initiated.
The third quality of service class is utility.
Utility quality of service is often associated with things that have progress bars or other activity indicators.
This work generally takes a longer amount of time, and it's something that's still visible to someone using your application.
A good example of this is rendering a movie.
This is something that doesn't block someone from continuing to use your application, but it's going to take a longer amount of time to complete.
And the final quality of service class is background.
Background work is not visible to someone using your application.
In fact, they may not even be aware that it's happening.
A good example of that is indexing work.
Indexing is usually important for the performance of your application, but it's not something that someone using your app is aware of.
All of these quality of service classes are important because if you, when you choose the quality of service class, it helps inform the system how it should manage resources so that less important work like background operations and indexing doesn't adversely affect more important work like animating, even if that work is happening in a different process.
So once we've chosen from one of the quality of service classes, there are two main ways that you can specify quality of service in your applications, and the first way is by supplying an optional qos parameter to the async method on the dispatch queue.
In this case, I've specified a qos of .background.
This means that when the supplied block of code is running asynchronously, it will be using the background quality of service.
Additionally, if you're using the OperationQueue or Operation APIs, both of those have a qualityOfService property that you can set, such as .utility.
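Both ways of specifying quality of service can be sketched in a few lines; the queue labels and the work inside the blocks are placeholders of my own:

```swift
import Foundation

// Specifying QoS on a dispatch queue: the qos parameter to async.
let indexingQueue = DispatchQueue(label: "com.example.ImageBox.indexing")
indexingQueue.async(qos: .background) {
    // Invisible housekeeping, e.g. rebuilding a search index.
}

// Specifying QoS on an OperationQueue: the qualityOfService property.
let exportQueue = OperationQueue()
exportQueue.qualityOfService = .utility
exportQueue.addOperation {
    // Longer-running, user-visible work, e.g. rendering a movie.
}
```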
So now that we know a little bit about quality of service and how we can specify it, let's go back to our ImageBox application, and see if we can choose an appropriate quality of service for adding images.
And to do that, we can think about the three attributes of this work: The visibility, importance, and expectation.
Well, adding an image is something that is visible to someone using our application, but it's not necessarily required to complete before we can continue doing other things like browsing images or adding more images.
Additionally, because we are showing that placeholder image, we've given an indication that this is an operation that could take a longer amount of time.
For all of those reasons, the utility quality of service may be an appropriate choice for this work.
So now we know some ways that you can move expensive work, like I/O, off of your main thread and onto a separate dispatch queue, and how to specify the intent of that work using quality of service.
Let's take a look at the third pillar of our I/O philosophy, adopting appropriate APIs, and the first one that I'd like to mention is Asset Catalogs.
If you're not already aware, Asset Catalogs are a way to easily manage resources in your application, like images.
They're used to store things like your app icon and launch images and also all of the images for the different devices that you support and scale factors, like retina or non-retina.
When building games with SpriteKit, Asset Catalogs are also the way that you build sprite atlases.
And you can use Asset Catalogs to tag resources for use with the on-demand resources feature.
And another good example of how you can use Asset Catalogs is for storing resources for your watch complications.
So why are Asset Catalogs great for I/O?
Well, Asset Catalogs have some great storage efficiency properties.
First of all, because Asset Catalogs store all of their images in a single optimized format rather than many individual files, you can have a lower on-disk footprint by using Asset Catalogs.
Additionally, with features like app slicing on iOS, when you download an app from the App Store, it uses the metadata in your Asset Catalog to determine which resources it should download to your device.
For example, if I download an app to my iPhone, the App Store knows that it doesn't need to download any resources for an iPad or for any iPhones with different screen resolutions, and this can save a lot of space on my device.
Furthermore, Asset Catalogs can be great for performance.
Because of this optimized format that they're stored in, image loading can be faster.
And if you're using them to make sprite atlases for your games, since GPUs are much better at managing a single larger resource rather than many tiny resources, these sprite atlases can improve your texture rendering times.
And, lastly, if you're using Asset Catalogs on machines with hard drives running macOS, you can also improve your app launch time.
In fact, we've seen up to a ten percent improvement in app launch time on these machines just by switching to Asset Catalogs.
And you might be thinking that to get such a big performance improvement, this must be difficult or time consuming, but, in fact, if you're already using the standard NSImage- and UIImage-based APIs, switching to Asset Catalogs is easy, and I'd like to demonstrate that now with an example project.
Here I have a project that hasn't yet adopted Asset Catalogs.
To get started, first we can choose new file from the file menu.
Then, from the resource category, select Asset Catalog and click next.
When prompted, enter a name for your Asset Catalog and choose a location.
Then you can click create, and Xcode creates a new, blank Asset Catalog in your project.
To move all of your existing assets from your project into this new Asset Catalog, first open the add menu at the bottom of the screen, and choose import from project.
Xcode displays a list of all of the images in your project, and when I click import, it will move all of these into my new empty Asset Catalog.
Xcode automatically figures out which images are for which devices and which scale factors.
Now when I rebuild my application, it will be using this new Asset Catalog, and that's it.
It took less than a minute, and I didn't have to change a single line of code.
So it's really easy, and I highly encourage you to adopt Asset Catalogs today if you haven't already.
One more thing I'd like to mention with Asset Catalogs is a new feature this year, and that's image compression.
By default, images in your Asset Catalog are lossless, but new this year, you can choose from one of the lossy image compression formats.
These formats have hardware accelerated decompression.
So they're really fast, and because of the compressed format, they can result in lower memory footprints.
If you have a lot of assets in your application, you may benefit from the potential memory and space savings by using image compression.
So let's see how we can use image compression back in the project that I just converted to use Asset Catalogs.
First, let's click on an image in our catalog.
Then open the utility sidebar on the right-hand side.
And then click on the attributes inspector.
New in Xcode is a compression popup menu.
When I select that, it displays all of the available image compression formats.
In this case, I'll choose lossy automatic so that Xcode can choose a good format for me.
So that's a little bit about how you can use Asset Catalogs in your applications, adopt them, and use the new image compression feature.
Now I'd like to hand it back to Kushal, who's going to tell you more about some other API's that you can adopt for storing your data.
[ Applause ]
Asset catalogs are an easy and efficient way to manage your app's assets.
Another thing that a lot of us think about is how and where our application data lives on device.
A lot of us are familiar with the serialized data formats.
For example, Plists, XML, and JSON.
These data formats are popular because of their simplicity and ease of use, and because they have become common data interchange formats in a lot of web-based services.
These data formats are good for small read-only data such as configuration information in your Info.plist file.
However, they are not a database, and the biggest reason is that minor updates to these files cause the entire data file to be written out to the disk, which is really bad for I/O efficiency.
For all your data storage needs, we would recommend using Apple's SQLite-backed database framework, Core Data.
Core Data is a Cocoa application development framework for managing your application data.
It handles your data persistence by using SQLite as a backing store.
It automatically manages objects, object graphs, and relationships between those objects to allow you to manage your data easily and efficiently.
It also does change tracking, which lets you do undo and redo operations on your data models.
And Core Data is completely integrated with the Xcode tool chain so that you can build and visualize your data model directly from the Xcode UI.
Now that we're aware of this amazing tool and framework to use for designing or writing our data model, let's think about how to design our data model.
And the best way to do that is to base your data model on the UI needs of your application.
Let's go back to ImageBox, which up until now has been using a giant plist to write out all the files and all the images associated with the application, and instead move it to a Core Data model.
Now if you think about the application, there are two main entities for ImageBox.
The first is the list of items that's there in the collection view, and the second is the notes associated with each of these items.
So let's go ahead and put them in a table of their own.
And the first table is BoxItem, which represents a particular item in the collection view, and the second table is Notes, which represents the notes themselves.
The BoxItem table contains a Boolean which represents whether the image is a favorite or not and contains the full resolution image of the image that you need to represent.
The Notes table contains a note body for all notes associated with the BoxItem, and we relay these two tables using a simple one is too many relationship.
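Under the hood, Core Data's SQLite backing store ends up with tables along these lines. The plain-SQLite sketch below (in Python) is only an illustration of the one-to-many shape; the column names are our stand-ins, and in a real app Core Data generates and manages the schema for you:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE BoxItem (
    id        INTEGER PRIMARY KEY,
    favorite  INTEGER NOT NULL DEFAULT 0,
    imageData BLOB                             -- full-resolution image, as in the first model
);
CREATE TABLE Note (
    id        INTEGER PRIMARY KEY,
    body      TEXT,
    boxItemID INTEGER REFERENCES BoxItem(id)   -- the to-one side of the one-to-many
);
""")
db.execute("INSERT INTO BoxItem (id, favorite, imageData) VALUES (1, 0, x'00')")
db.execute("INSERT INTO Note (body, boxItemID) VALUES ('first note', 1)")
db.execute("INSERT INTO Note (body, boxItemID) VALUES ('second note', 1)")

# One BoxItem row can own many Note rows.
count = db.execute("SELECT COUNT(*) FROM Note WHERE boxItemID = 1").fetchone()[0]
print(count)  # → 2
```

The one-to-many relationship lives entirely on the Note side as a foreign key back to its owning BoxItem, which is also why fetching notes for an item later requires a join.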
Now when we used this data model and looked at the performance of our application, we noticed that the app launch was really slow.
We investigated using Instruments, and we found that the app was spending most of its time fetching the Core Data model on the launch path.
So we need to look at application launch performance from Core Data's perspective, and luckily Core Data lets us do just that.
It has a set of tools that let you investigate how core data is doing on your behalf.
For example, you can set a launch argument on your application, com.apple.CoreData.SQLDebug, with a verbosity level that lets you see how Core Data is interacting with its SQLite backing store.
The Core Data Instruments template lets you see anti-patterns in terms of fetching and loading too much data.
And, lastly, the standard set of SQLite query analysis tools, for example EXPLAIN QUERY PLAN, is available, which lets you dive deep into a particular query and find out about its performance.
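For a feel of what EXPLAIN QUERY PLAN reports, here is a plain-SQLite sketch in Python (an illustration only; against a Core Data store you would run this against the .sqlite file, and the table and index names here are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE BoxItem (id INTEGER PRIMARY KEY, favorite INTEGER)")

# Without an index, filtering on `favorite` has to scan every row.
scan_detail = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM BoxItem WHERE favorite = 1"
).fetchone()[-1]
print(scan_detail)  # a full-table SCAN (exact wording varies by SQLite version)

# With an index, the same query becomes an index SEARCH.
db.execute("CREATE INDEX byFavorite ON BoxItem(favorite)")
search_detail = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM BoxItem WHERE favorite = 1"
).fetchone()[-1]
print(search_detail)  # a SEARCH using the byFavorite index
```

A SCAN in the plan means every row is read, which is the kind of query that dominated our launch; a SEARCH means SQLite can jump straight to the matching rows.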
To learn more about these tools, I would recommend watching last year's WWDC talk, What's New in Core Data.
Now that we have these tools, let's use one of them to find out what's wrong with our data model.
In order to do that, click on the project and choose Edit Scheme.
In the window that opens, we're going to select the Arguments pane and then add a new argument, com.apple.CoreData.SQLDebug, at the highest verbosity level of 3.
Once we're done with that, we'll go ahead and click Close.
And now we simply launch our application from the Xcode UI.
This should rebuild your project, load it, and launch the application.
As you'll notice here, the console shows various logs from Core Data about the performance of the data model.
Another thing you should notice is that the app is taking multiple seconds to launch, and it still hasn't finished launching.
We see some more data from Core Data in the log output, and, finally, the app launches.
If you go back to the Xcode UI, you can dig through all of these logs and figure out what was wrong with your data model.
So let's go ahead and do that for our application.
Now one of the first logs that you see here is that Core Data is doing a fetch of all the rows from the SQLite database for the BoxItem table, and that is exactly what we expect.
However, the next log tells us that that fetch took almost nine seconds, which is really bad and one of the biggest reasons for our app's launch slowness.
Now if you go back to the previous query that was executed to fetch all this data, one thing you'll notice is that we are fetching the full-resolution image for each of the items in the BoxItem table, even though we just show thumbnail images on the launch screen.
Moving on, we also notice that Core Data is doing a join between the BoxItem table and the Notes table for every item it fetches from the BoxItem table.
And the reason it's doing that is because there is a one-to-many relationship between these two entities, and we need to show a UI badge on the launch screen to represent whether there are notes associated with the BoxItem.
So let's go ahead and fix our data model.
The first thing we'll fix is the join between these two tables, and the reason Core Data was doing this join was because we need to show in the UI whether notes are present for a BoxItem.
So to improve this model, we can simply add another field to the BoxItem table, called notesPresent.
The true or false value of this field tells us whether we need to put a UI badge on the launch screen.
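Here is a plain-SQLite sketch in Python of that denormalization (illustrative only; notesPresent is our stand-in name, and in a real app Core Data maintains the schema, while your model code keeps the flag in sync when notes are added or removed):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE BoxItem (
    id           INTEGER PRIMARY KEY,
    favorite     INTEGER NOT NULL DEFAULT 0,
    notesPresent INTEGER NOT NULL DEFAULT 0   -- denormalized flag for the badge
);
CREATE TABLE Note (
    id        INTEGER PRIMARY KEY,
    body      TEXT,
    boxItemID INTEGER REFERENCES BoxItem(id)
);
""")
db.execute("INSERT INTO BoxItem (id) VALUES (1)")
db.execute("INSERT INTO BoxItem (id) VALUES (2)")

# Adding a note also sets the flag on its owner, so launch never needs the join.
db.execute("INSERT INTO Note (body, boxItemID) VALUES ('a note', 1)")
db.execute("UPDATE BoxItem SET notesPresent = 1 WHERE id = 1")

# The launch-screen query now reads a single table.
badges = dict(db.execute("SELECT id, notesPresent FROM BoxItem").fetchall())
print(badges)  # → {1: 1, 2: 0}
```

The trade-off is a tiny extra write whenever notes change, in exchange for removing a per-item join from the hottest path in the app.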
The next problem with our data model was that we were fetching the full-resolution images on the launch screen.
So let's go ahead and fix that.
We replace the image data with thumbnail data and instead move the full-resolution image data into a table of its own, and we link these two tables with a simple one-to-one relationship.
Now as a lot of you know, these images can become really large, and it might be a good idea to store them as separate files on the file system rather than putting them in the SQLite database.
So we're going to replace the full-resolution image in the database itself with an image URL and store the images directly on disk.
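A plain-SQLite sketch in Python of the final layout (illustrative; the column names, file name, and byte sizes are made up): the thumbnail stays inline for the launch screen, while the large full-resolution payload lives in its own file and the database stores only its path:

```python
import os
import sqlite3
import tempfile

image_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE BoxItem (
    id            INTEGER PRIMARY KEY,
    thumbnailData BLOB,   -- small, fine to keep inline for the launch screen
    imagePath     TEXT    -- full-resolution image lives on the file system
)""")

full_res = b"\x89PNG" + b"\x00" * 1024   # stand-in for real image bytes
thumbnail = full_res[:64]                # stand-in for a scaled-down copy

path = os.path.join(image_dir, "item-1.png")
with open(path, "wb") as f:              # the big payload goes to its own file
    f.write(full_res)
db.execute("INSERT INTO BoxItem VALUES (1, ?, ?)", (thumbnail, path))

# Launch only touches the small thumbnail column...
row = db.execute("SELECT thumbnailData FROM BoxItem WHERE id = 1").fetchone()
print(len(row[0]))  # → 64

# ...and the full image is read from disk only when a detail view needs it.
stored_path = db.execute("SELECT imagePath FROM BoxItem WHERE id = 1").fetchone()[0]
with open(stored_path, "rb") as f:
    print(len(f.read()))  # → 1028
```

This keeps the launch fetch small and lets the file system, with its own caching, handle the heavyweight image reads on demand.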
Now let's look at the launch performance of our application once we made these changes.
Again, we run the project from the Xcode UI with the new data model that builds it and launches it.
As you saw there, the application launched four to five times faster just by changing the data model.
So basing your data model on your UI needs has significant impact on your launch and overall performance of your application.
Now that we've taken a look at ways you can reduce and optimize the amount of I/O's your application does, let's see how you can test your app for I/O performance.
One of the things we recommend is to test your app on a variety of devices.
If your app ships on multiple platforms, it might be a good idea to test it on a variety of devices from all those platforms.
Even if your app ships on a single platform, it might be a good idea to test it across device generations because I/O characteristics vary widely.
Now another thing that can vary between your environment and your app's user environment is the network condition, and to help you test worst-case network conditions, we provide a tool called Network Link Conditioner.
In order to get to the Network Link Conditioner, open the Settings app, scroll all the way to the bottom, and tap on Developer settings, which brings you to this menu.
Now as you see here, we have the Network Link Conditioner, and tapping on that opens up this menu, which shows various kinds of profiles you can install on your device.
We have 3G, High Latency DNS, and my favorite, Very Bad Network.
So let's go ahead and use that by picking Very Bad Network and enabling it using the toggle switch on top.
And that's it.
Your device will now behave as if it's in a very bad network, and you can test your application against it.
Another factor to remember is that I/O is a shared resource on the system.
So the I/O performance of your application might be impacted by other system resources or other I/O's happening on the device.
For example, if there are other applications that are running due to multitasking, your app's I/O performance might be affected.
So it's a good idea to test your application in the presence of other apps.
Also, the system tries to maintain a fair balance between its memory and I/O usage, and under memory pressure conditions, your I/O latencies might be affected.
So we would recommend testing your app under memory pressure conditions as well.
Lastly, the system maintains a bunch of caches by default on your behalf to help you access and store your data better.
The state of these caches could affect the I/O performance of your application.
To test the worst-case behavior there, on iOS we would recommend rebooting your device, and on macOS you can use the purge command, which flushes these caches and simulates worst-case behavior for your application.
To make sure that your app is robust against all these environmental variations, we recommend following the I/O philosophy to reduce and optimize your I/O's.
So here are some key takeaways from the talk.
Reduce the amount of I/O's your application does since that significantly impacts battery life.
Move your I/O-heavy workloads off the main thread and keep the main thread idle for UI and animations.
Specify the proper quality of service to express the intent of the work you're performing.
Switch to Asset Catalogs since they're an easy and efficient way to manage your app's assets.
Use Core Data for all your database needs, and, lastly, test and measure your app for I/O performance.
For more information, go to www.apple.com, and the session ID is 719.
Here are some related sessions that happened during the week that you can refer to for more details on the APIs and tools we mentioned.
And thanks for your time.