Advances in Research and Care Frameworks 

Session 205 WWDC 2018

HealthKit, CoreMotion, and other iOS frameworks, combined with the ResearchKit and CareKit open-source projects, provide a deep platform for the creation of game-changing apps for care teams, researchers, and the medical community. Discover new active tasks that leverage calibrated device data and how new CoreMotion APIs deliver insightful results capable of assisting diagnosis and improving care. Hear about updates and contributions from the open-source community and gain practical guidance you need to rapidly deliver your next research or care app.

[ Music ]

[ Applause ]

Thank you all for coming.

And welcome to the session, Advances in Research and Care Frameworks.

My name is Srinath and I’m a software engineer.

Now, if you’re familiar with our WWDC talks in the past, you might remember that our sessions have focused primarily on our two open-source health frameworks.

ResearchKit and CareKit.

This year, although we’ll still be talking a lot about these frameworks, we’ll also be covering some new APIs and features that we’ve been working on in the larger health space.

But for those of you who are somewhat new to these frameworks, I highly recommend checking out our two sessions from last year.

What’s New in CareKit and ResearchKit was hosted by Sam Mravca, where she gives a really good overview of both ResearchKit and CareKit.

The other session, Connecting CareKit to the Cloud was by Kelsey Dedoshka, where we introduced a new CareKit bridge API that allows developers to sync care plans between patients and providers using any HIPAA compliant backend.

And on that note, I’m really excited to show you all some of the amazing work that’s been happening at Penn Medicine.

The Penn Life Gained app utilizes CareKit and the CareKit bridge API to help patients both before and after bariatric surgery at Penn Medicine.

The app leverages care plans, healthcare data and other interactive components to help patients go through their weight loss journey.

Data is constantly being synced between the iOS apps used by the patients, and the iPad apps used by care providers in order to closely monitor and interact with their patients in real time.

The team has received extremely positive feedback from both patients as well as clinicians with regards to the impact that these apps have had on their bariatric program.

And now that we have touched upon where we have come since last year, I would like to take a quick step back and look at all of our health frameworks as a whole.

The creation of these frameworks stems from our overarching desire to improve the world of health, through technology.

And in that process, we really want to empower our users and provide more tools to developers and researchers, so at the end of the day, you and all of us can help improve and advance two core areas of interest, research and care.

This year, I’m really excited to be talking to you about some of these topics which we care so deeply about in a variety of ways.

Starting with the framework focus, where I’ll talk to you about updates we have made to our ResearchKit framework.

From there, we’ll move on to a more condition-focused approach, specifically around movement disorders like Parkinson’s, where Gabriel will come on stage and talk to you about a brand-new API.

And finally, we’ll bring it all together with a demo where we will showcase how you can utilize these new APIs and features directly in code.

Now let’s get started with ResearchKit.

Over the past year, we have been putting in a lot of effort to improve our openness and engagement with our community.

We’ve also made some updates to our ResearchKit UI and modules, as well as some new additions to our existing library of Active Tasks.

Now, let’s get started with community updates.

I want to cover two main topics, repository privileges and schedule updates.

Over the past few months, we have been expanding our access rights and providing write privileges to members of our community.

In fact, we’ve chosen five all-star contributors and given them direct access to the ResearchKit repository that also allows them to merge PRs.

A huge thank you and congratulations to Erin, Fernando, Nino, Ricardo, and Shannon.

Next, I want to touch on our schedule updates.

Historically, we’ve been pushing to master and stable at the same time.

Now, we realize that this inhibits the ability for developers, like you, to leverage some of our in-house features like localization, accessibility, and QA, which is why this year we will be pushing to stable two to three months after our push to master.

We hope that you can use this time to check out our latest updates, provide feedback and submit PRs.

And the changes that you make will soon make it into our stable, tagged release branch, as opposed to you having to wait for an entire release cycle.

And now that we have touched upon community updates, I would like to dive a little bit deeper and talk about updates we have made to the ResearchKit framework itself, starting with UI updates.

Now, for comparison, this is what our Mini Form step looks like in ResearchKit 1.5, as taken from our ORK test app.

And here is the same Mini Form step as it will be available in ResearchKit 2.0.

As you can see, we have worked very hard to update the overall look and feel of ResearchKit UI to closely resemble the latest iOS style guidelines.

Let’s take a closer look.

We’ve moved the progress label from the center of the navigation bar to the right side and applied some styling.

This allows us to leverage the large titles feature in nav bars and apply it to all of our step titles.

Now, to keep the styling consistent, we are also adding a new card view in order to improve the overall user experience for those who are answering multiple questions and surveys, as it now provides a clear break between the actions you’re requested to perform.

Now, this card view is applied by default to all our steps and forms.

But we are also exposing a Boolean property that you can set to false for backwards compatibility.

And finally, we are also adding a footer container view, to improve the navigation flow.

The cancel button is now part of the footer view, which is always pinned to the bottom.

What this means is that your users will no longer have to scroll all the way to the bottom of the step, in order to access forward navigation options.

And having all these controls in one place makes it much more intuitive.

And now, moving on, one of the most commonly used modules in ResearchKit is informed consent, which is used to generate a PDF and attach a user signature to it.

Now, we realized how important it is to be able to surface some of these confidential documents to your users within the context of your app.

Which is why we are adding a new ORKPDFViewerStep.

This is built on top of the PDFKit framework that came to iOS just last year.

Let’s take a closer look at some of the functionalities that this step provides.

Quick navigation to easily switch between pages.

Real-time annotations to mark up the document if necessary.

Search functionality for your users to query the entire document for keywords or phrases.

And the ability to share this PDF or save it using the standard iOS share sheet.

And what’s even better is how easy it is to incorporate this in your app.

You create an instance of ORKPDFViewerStep with a unique identifier, and simply provide us the file path to the PDF document that you wish to display.

And now, I’d actually like to switch gears and talk about one of the core components of ResearchKit.

Active Tasks.

For those of you who are unfamiliar, Active Tasks are prepackaged modules that allow users to perform certain tasks or take certain tests in a given amount of time.

When the user completes the step, the developer receives a delegate callback with an ORKResult object.

Now, this object consists of a variety of data points, which include things like user responses, timing information, and data recorded from different sources like the accelerometer, the gyroscope, health data, and even your microphone.

And this year, we are also adding support for health records.

Now, let’s take a look at how you can utilize this in your app.

Now, your objective here is to create a recorder configuration that can query health clinical data types.

And you need to provide us with two important parameters.

The first is of type HKClinicalType, and the second is an optional of type HKFHIRResourceType.

And once you have created this recorder configuration, you attach it to the step.

So, now when the user is about to perform the task, they will be prompted with HealthKit’s new authorization UI.

And only if they grant access will we be able to run the query.

And once the user completes the task, as part of the delegate callback in your ORKResult object, you will also have the information that you requested from health records.

Now, to understand and learn more about these different health record types, I highly recommend everyone here to also visit the session, Accessing Health Records with HealthKit, that is happening right here at 3 p.m., where they will talk about everything related to health records in great detail, including some very important best practices.

And now that we have covered updates to the Active Task module as a whole, let’s talk about the Active Tasks themselves.

This year, we are adding new modules focusing on three main areas of health; hearing, speech, and vision.

Let’s get started with hearing.

We are adding a new dBHL tone audiometry step.

This implements the Hughson-Westlake Method and allows you to determine the hearing threshold level of a user in the dBHL scale.

And to facilitate this and ensure that you get the most accurate results, I’m really excited to say that for the very first time we are open sourcing calibration data for our AirPods.

Now, this comes with three tables.

The first one is a volume curve for AirPods on all iOS devices.

The second one is the sensitivity per frequency, where sensitivity is measured in decibels sound pressure level (dB SPL).

And finally, we are also providing the reference equivalent threshold sound pressure level, or RETSPL, tables.

Now, it’s really important to note that the RETSPL table is still in beta.

This means that we are actively running internal validations and tests.

And over the next few weeks, we will be updating these tables as we start converging on accurate data.

Now let’s take a look at how this Active Task actually works.

The user is required to hear tones at a particular frequency at varying levels of dBHL value.

When the user hears a tone, they are expected to tap the button to indicate that they have heard it.

And at that point we will start decreasing the dBHL value as indicated here by the green dots.

Now, when the user fails to tap the button after a given timeout period, we will start increasing the dBHL values as indicated by the red dots.

And over time we will feed these data points to the Hughson-Westlake Method in order to determine the hearing threshold level of the user in the dBHL scale.

Now, from a developer perspective, the entirety of tone generation occurs in three phases.

The first one, is a pre-stimulus delay, which is a developer-specified maximum number in seconds that we use to generate a random delay between one and that value before we start playing the tone to the user.

This is to ensure that the user cannot cheat the test by just randomly tapping on the button.

Next, we are also providing a property for tone duration, which governs the actual duration for which the tone will be played.

And finally, there is the post-stimulus delay, which is the amount of time allocated to the user to respond to that specific tone.
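The staircase behavior just described can be sketched in plain Swift. This is a simplified, hypothetical simulation: the 10-down/5-up step sizes follow the classic Hughson-Westlake convention, but the function name, the fixed trial count, and the idealized listener model are illustrative assumptions, not ResearchKit’s actual implementation.

```swift
// Simplified sketch of a Hughson-Westlake-style staircase.
// Assumption: step down 10 dBHL after each response (the green dots),
// step up 5 dBHL after each miss (the red dots). An idealized listener
// responds whenever the tone is at or above their true threshold.
func staircaseLevels(listenerThreshold: Double,
                     startLevel: Double = 40,
                     trials: Int = 12) -> [Double] {
    var level = startLevel
    var presented: [Double] = []
    for _ in 0..<trials {
        presented.append(level)
        if level >= listenerThreshold {
            level -= 10   // heard: decrease the dBHL value
        } else {
            level += 5    // missed: increase the dBHL value
        }
    }
    return presented
}

let levels = staircaseLevels(listenerThreshold: 25)
// The presented levels quickly converge to a narrow band
// around the listener's true threshold of 25 dBHL.
```

Running this shows why the method converges: after the first few large downward steps, the levels oscillate tightly around the true threshold, which is the value the algorithm ultimately reports.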

To incorporate this in your app, you create an instance of ORKdBHLToneAudiometryStep with a unique identifier and provide us values for some of the parameters, including a frequency list, which is an array of frequencies that you wish to be played back to your user.

We’re also exposing more properties that you can further customize for your specific use case.

Now, when the user completes this task, as part of the delegate callback, the ORKResult object will be returned to you.

Let’s take a look at what that object looks like for this particular task.

So, at the top level you get a variety of information that includes things like output volume, and also includes an array of sample objects.

These objects in turn encapsulate things like channels, which refers to whether the tone was being played in the left or right channel to the user, as well as the threshold value, as determined by the Hughson-Westlake Method.

This also consists of an array of unit objects.

And the unit objects, in turn, provide details like the dBHL value at which that particular tone was being played, and a variety of timestamps, which also include when exactly the user tapped the button.
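To make the shape of this result hierarchy concrete, here is a small Codable sketch. The JSON field names (outputVolume, samples, channel, calculatedThreshold, units, dBHLValue, userTapTime) are illustrative stand-ins mirroring the description above, not ResearchKit’s exact serialization.

```swift
import Foundation

// Hypothetical result hierarchy: a top-level result with output volume
// and samples; each sample has a channel and a computed threshold plus
// per-tone units carrying the dBHL value and an optional tap timestamp.
struct AudiometryUnit: Codable {
    let dBHLValue: Double
    let userTapTime: Double?   // nil when the user did not respond
}
struct AudiometrySample: Codable {
    let channel: String        // "left" or "right"
    let calculatedThreshold: Double
    let units: [AudiometryUnit]
}
struct AudiometryResult: Codable {
    let outputVolume: Double
    let samples: [AudiometrySample]
}

let json = """
{"outputVolume": 0.8,
 "samples": [{"channel": "left", "calculatedThreshold": 25.0,
              "units": [{"dBHLValue": 40.0, "userTapTime": 1.2},
                        {"dBHLValue": 30.0, "userTapTime": null}]}]}
""".data(using: .utf8)!

let result = try! JSONDecoder().decode(AudiometryResult.self, from: json)
```

A sketch like this is mainly useful if you export the result to JSON for off-device analysis; on device you would read the properties of the returned result object directly.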

So, now let’s move on to our next task that falls under the hearing category.

We are adding an environment SPL meter.

This implements an A-weighted filter in order to measure the environmental sound pressure level in dBA, or in other words, it tells you how noisy it is.

Now, this step also accepts a threshold value, and this makes things interesting, because now you can use the step as a gating step.

So, for example, if you want your users to perform the tone audiometry task, you can add this task before that to ensure that your users are not in an environment that is too noisy to accurately perform the task.

To add this, you create an instance of ORKEnvironmentSPLMeterStep with a unique identifier and provide us the threshold value.

We’re also exposing additional properties that you can further customize.
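For context on what an A-weighted filter does, here is the standard IEC 61672 A-weighting curve in plain Swift: the gain in dB applied to a frequency component before summing into an overall dBA level. This is the published standard formula, shown for illustration; it is not ResearchKit’s internal filter code.

```swift
import Foundation

// Standard A-weighting (IEC 61672): attenuation in dB applied to a
// component at frequency f (Hz). The curve is normalized so that the
// weight at 1 kHz is 0 dB; low frequencies are strongly attenuated,
// roughly matching human loudness perception.
func aWeighting(_ f: Double) -> Double {
    let f2 = f * f
    let ra = (pow(12194.0, 2) * f2 * f2) /
             ((f2 + pow(20.6, 2)) *
              sqrt((f2 + pow(107.7, 2)) * (f2 + pow(737.9, 2))) *
              (f2 + pow(12194.0, 2)))
    return 20 * log10(ra) + 2.0   // +2.0 dB normalizes 1 kHz to 0 dB
}
```

For example, a 100 Hz hum is weighted down by roughly 19 dB, which is why a low rumble contributes far less to a dBA reading than a mid-frequency voice at the same physical sound pressure.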

And now, let’s take a quick detour and move on to our next category, speech.

We’re adding a speech recognition module, which leverages the Speech framework on iOS. This gives us direct access to a real-time speech recognizer that supports over 50 different languages, or locales.

As part of this task, the user is asked to either repeat a sentence, or describe an image.

And once they are done speaking, the users will be automatically taken to the next step, where they can edit the generated transcript.

So, for example, in this case “quick” and “fox” were incorrectly interpreted as “quite” and “box.”

Your users can just tap on these words and edit them if necessary.

Now, it’s important to note, that as part of this task, we are returning to you a very, very rich dataset that consists of three main things.

A direct audio recording of what the user said, the transcription generated by the speech recognition engine, and the transcript as edited by the user.

To add this to your app, you create an instance of ORKSpeechRecognitionStep, and provide us with either a UIImage or a string.

You can also customize the locale for this recognition.

And also, you can surface real-time transcription as the user is speaking.

Now, let’s take a closer look at one of the subsets of our result object.

This is of type SFTranscription, as surfaced by the Speech framework.

The formattedString provides the transcript, and the array of segment objects essentially breaks down the transcript into substrings, as well as providing a confidence level for each substring.

On top of this, they also provide an array of alternative strings, as you can see here in this illustrative example.

Now, these results can be used to derive syntactic, semantic and linguistic features, as well as speaking rate in order to evaluate the speech patterns for various medical conditions, including cognition and mood.
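One simple use of those per-segment confidence levels is deciding which words to highlight for the user in the edit step. The sketch below uses simplified stand-in structs that loosely mirror SFTranscriptionSegment; the type, field names, and threshold are illustrative assumptions, not the Speech framework’s API.

```swift
// Simplified stand-in for a transcription segment: the recognized
// substring, the recognizer's confidence (0.0 ... 1.0), and any
// alternative interpretations it considered.
struct Segment {
    let substring: String
    let confidence: Double
    let alternatives: [String]
}

// Flag segments whose confidence falls below a (hypothetical) threshold,
// so the edit-transcript step can draw the user's attention to them.
func wordsNeedingReview(_ segments: [Segment],
                        threshold: Double = 0.5) -> [String] {
    segments.filter { $0.confidence < threshold }
            .map { $0.substring }
}

let segments = [
    Segment(substring: "quite", confidence: 0.3, alternatives: ["quick"]),
    Segment(substring: "brown", confidence: 0.9, alternatives: []),
    Segment(substring: "box",   confidence: 0.4, alternatives: ["fox"]),
]
let flagged = wordsNeedingReview(segments)
```

In the earlier example, “quite” and “box” would be the segments flagged for review, matching the words the user actually needed to correct.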

Our next task, interestingly, is a combination of speech and hearing.

The speech and noise Active Task.

This allows you to perform fully automated speech audiometry.

Conventional tone audiometry uses pure tones, which are essentially sine waves.

Now, there have been recorded instances, where users are able to clearly distinguish pure tones, but find it extremely difficult to distinguish words when they are intermixed with noise.

And this closely reflects real-world examples of early-stage hearing loss.

For example, when you are unable to understand what the person sitting in front of you is telling you in a noisy restaurant.

Now, before I go into further details let’s take a look at how this task actually works.

“The actor tried three green pictures.”

Now, as you noticed, an audio file was played to the user, and once it completes, they are immediately requested to repeat what they have just heard.

Now, the speech and noise Active Task uses audio files generated by combining five words from a closed-set matrix that was validated internally for things like word familiarity, uniformity, and difficulty, and also to ensure that the sentences generated by the synthesizer were phonetically balanced and, more importantly, consistent.

These files are then programmatically mixed with background noise at varying levels of signal to noise ratio, or SNR.

And developers will also have the option to set the value for the gain for all of the noise signals.

Now, speech reception threshold, is defined as the minimum value in SNR at which a user can understand only 50% of the spoken words.
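That 50% point can be estimated by interpolating intelligibility scores measured at several SNR levels. The sketch below is one straightforward way to do it with linear interpolation; the data points and the function itself are hypothetical, not part of the ResearchKit API.

```swift
// Estimate the speech reception threshold (SRT): the SNR (dB) at which
// the listener repeats 50% of the words correctly. We sort the measured
// (SNR, fraction-correct) points and linearly interpolate between the
// pair that brackets the 50% target. Data here is hypothetical.
func speechReceptionThreshold(_ points: [(snr: Double, correct: Double)],
                              target: Double = 0.5) -> Double? {
    let sorted = points.sorted { $0.snr < $1.snr }
    for (lo, hi) in zip(sorted, sorted.dropFirst()) {
        if lo.correct <= target && hi.correct >= target {
            let t = (target - lo.correct) / (hi.correct - lo.correct)
            return lo.snr + t * (hi.snr - lo.snr)
        }
    }
    return nil   // the 50% point is not bracketed by the data
}

// Example: intelligibility rises from 10% at -10 dB SNR to 95% at +5 dB.
let srt = speechReceptionThreshold([(-10, 0.1), (-5, 0.3), (0, 0.7), (5, 0.95)])
```

With these example points the 50% crossing falls halfway between -5 and 0 dB SNR, so the estimated SRT is -2.5 dB.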

Now, our vision for this test is that over the next few weeks, we’ll be uploading over 175 different files that correspond to 25 lists, each with 7 sentences.

And in the long-run, we want to be able to support this test for multiple languages, especially ones where currently speech and noise is not possible due to a lack of speech database or other testing resources.

So, if you are a researcher in this particular field and if you have a request for specific locale, I highly encourage you to reach out to us and we’ll do our best to accommodate your request.

To add this to your app, you create an instance of ORKSpeechInNoiseStep, and point us to the audio file that you wish to play.

You can also specify the gain applied to the noise signal.
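To see how a noise gain relates to a target SNR, here is the underlying arithmetic in plain Swift: given the RMS levels of a speech file and a noise file, compute the linear gain to apply to the noise so the mix hits a desired SNR in dB. This is a sketch of the general mixing math, not ResearchKit’s actual mixing code.

```swift
import Foundation

// SNR(dB) = 20 * log10(speechRMS / (gain * noiseRMS)),
// so solving for the noise gain gives:
func noiseGain(speechRMS: Double, noiseRMS: Double, targetSNRdB: Double) -> Double {
    return speechRMS / (noiseRMS * pow(10.0, targetSNRdB / 20.0))
}

// Example: speech at 0.2 RMS, noise at 0.1 RMS, targeting +6 dB SNR.
let gain = noiseGain(speechRMS: 0.2, noiseRMS: 0.1, targetSNRdB: 6.0)
```

Lowering the target SNR raises the required noise gain, which is exactly how the task makes sentences progressively harder to understand.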

And lastly, let’s touch on vision.

The Amsler grid is a tool that is used to detect problems in a user’s vision that can be caused by conditions like macular degeneration.

Now, this test is conventionally performed in a doctor’s office on a traditional piece of paper, displaying a graphic like the one you see right here.

Users with perfect vision would see exactly this, whereas users who suffer from some conditions would start seeing distortions on this graph.

Now, please don’t panic if you’re seeing distortions right now.

That was intentional and it was added for dramatic effect.

Now, users simply have to point to places on the grid where they see distortions.

By replicating this grid, we are able to bring the functionality of this task to the users at home on their device.

Users simply have to annotate areas on the grid where they see distortions.

And we believe that developers can leverage some exciting iOS features like [inaudible] or the depth-sensing camera to enhance the experience of a user who is taking this particular task.

Now, all of these Active Tasks are really great for data collection and analysis.

But they are designed to be performed at a specified time for a given period.

And as we start to dig into specific conditions, we have realized that some of the more complex problems in health require constant monitoring, and therefore, introduce the need for passive, noninvasive data collection.

And to talk more about that I would like to introduce Gabriel up on stage.

[ Applause ]

Hello. My name is Gabriel and I’m here on behalf of the core motion team to introduce a new research API.

The movement disorder API.

As Srinath was mentioning, this is a passive, all-day monitoring API, available on Apple Watch, which will allow you to monitor the symptoms of movement disorders.

Specifically, two movement disorders which are relevant to the study of Parkinson’s disease.

Now, because of the targeted use case as a research API, you will need to apply for a special code signing entitlement in order to use this API.

This application process will be done through the Apple Developer web portal, starting in seed 2, though if you just can’t wait, and I don’t blame you, there will be sample data sets as well as demo code available in the ResearchKit GitHub repository for this talk.

So, let’s start talking about those two movement disorder symptoms.

As some of you may know, Parkinson’s is a degenerative neurological disorder which can affect the motor functions of those with the disease.

One of the identifiable symptoms of Parkinson’s is a tremor.

And this API monitors for tremor at rest, characterized by a shaking, or a trembling of the body when somebody is not intending to move.

Now, there are treatments including medications, which can help suppress and control the symptoms of Parkinson’s.

However, these very same treatments can often have negative side effects.

Side effects such as dyskinesias.

And one dyskinetic symptom that this API is able to monitor for is a fidgeting or swaying of the body known as choreiform movement.

So, to recap, you have tremor, a symptom of the disease as well as dyskinesia, a side effect of the treatment.

Let’s take a quick look at what tools researchers and clinicians currently have in order to assess these symptoms.

Typically, these types of assessments are done in clinic.

A clinician will ask a patient with Parkinson’s disease to perform physical diagnostic tests in order to rate and evaluate the severity of their condition.

These ratings provide a quantitative, though subjective to the rater, measurement of their condition at that point in time, when they’re in the clinic.

In order to get a broader and more complete picture, patients are also encouraged to keep diaries where they log their symptoms manually.

However, this can be cumbersome for patients and some will understandably forget or be unable to describe the full extent of their symptoms every single day.

Wouldn’t it be great if there was a passive, unobtrusive way to monitor for these symptoms?

Well, by using the movement disorder API, researchers and developers like yourselves will be able to build apps which collect these types of metrics continuously, whenever a patient is wearing their Apple Watch.

Not only does this give you a quantitative measurement of those symptoms, displayed here as the percentage of the time observed.

But it also gives you a longitudinal analysis, where you can track changes to those symptoms over time.

These algorithms were designed and piloted using data collected from Parkinson’s patients in internal clinical studies.

And we hope that you can use these tools and these learnings to build new care experiences that improve the quality of life of people with Parkinson’s.

But before you can do any of that, you’re going to need to know how to use the API, right?

All right.

Well, let’s take a look at some code.

The first thing you’re going to want to do is request motion authorization from the user in order to use their movement disorder data.

Once you’ve done that, you’ll want to call the monitorKinesias function in order to enable symptom monitoring.

Now, this symptom monitoring does turn on additional Apple Watch sensors, so it will have an impact on the battery life of your users, though they should still be able to get a day’s worth of data collection on a single charge.

As you can see, the maximum recording duration is seven days.

I know that many of you are going to be conducting studies that are longer than seven days, and if that’s the case, simply call the monitorKinesias function again in order to extend your data collection interval.

This will begin to store tremor and dyskinesia results on the user’s device, on your behalf.

At a certain point afterwards, you’re going to want to return to that application so you can query for those records.

Let’s take a look at what the query function looks like.

As you can see, in this line, we are querying for any new tremor records that were stored on the device since our last query date.

These records are stored on device by the API, but they will also expire after seven days.

And so, before they expire, you’re going to want to take ownership of those records, either by serializing them and storing them on the device yourself, or transferring them to a different platform so you can visualize and analyze them.
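One way to take ownership is to serialize the records with Codable and keep only those still inside the seven-day window. The DyskinesiaRecord type, its fields, and the helper below are illustrative assumptions standing in for the API’s real record type.

```swift
import Foundation

// Hypothetical record type standing in for a minute-long dyskinesia
// result: when it was recorded and the fraction of that minute the
// symptom was judged likely.
struct DyskinesiaRecord: Codable {
    let date: Date
    let percentLikely: Double
}

// Keep only records recorded within the last `windowDays` days,
// mirroring the API's seven-day on-device retention window.
func recordsWithinWindow(_ records: [DyskinesiaRecord],
                         now: Date,
                         windowDays: Double = 7) -> [DyskinesiaRecord] {
    let cutoff = now.addingTimeInterval(-windowDays * 24 * 60 * 60)
    return records.filter { $0.date >= cutoff }
}

let now = Date()
let records = [
    DyskinesiaRecord(date: now.addingTimeInterval(-1 * 86400), percentLikely: 0.2),
    DyskinesiaRecord(date: now.addingTimeInterval(-8 * 86400), percentLikely: 0.4),
]
let fresh = recordsWithinWindow(records, now: now)
let data = try! JSONEncoder().encode(fresh)   // persist before expiry
```

The encoded data can then be written to disk or uploaded to your backend, so nothing is lost when the on-device copies expire.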

The data will be returned to you as an array of minute-long result objects.

So, one hour’s worth of data, 60 result objects.

Let’s take a look at what one of those result objects will look like.

As you can see, the result objects return the percentage of that one minute during which the algorithm was able to observe the presence, or the absence, of the symptom.

For dyskinesia, here on the right, you can see that that’s pretty simple; unlikely or likely.

Tremor gives you a few more options.

Let’s go through them.

Since this is a tremor at rest, any active or chaotic motion will simply be returned as percent unknown.

And this is the same category that we use for low signal levels, where we’re unable to make a determination.

However, if the algorithm is able to make a determination, it will also return the severity of the tremor, ranging from slight up to strong.
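To turn those minute-long results into the kind of daily percentage summary shown on the earlier slide, you can average each bucket over the collected minutes. The TremorMinute struct and its field names are illustrative stand-ins for the API’s result properties, not the real types.

```swift
// Hypothetical minute-long tremor result mirroring the categories
// described above: unknown, none, then severities slight to strong.
// Each field is the fraction of that minute spent in the category.
struct TremorMinute {
    let percentUnknown: Double
    let percentNone: Double
    let percentSlight: Double
    let percentMild: Double
    let percentModerate: Double
    let percentStrong: Double
}

// Average one bucket across a batch of minutes, producing a
// "percentage of time observed" summary for that period.
func averageSlight(_ minutes: [TremorMinute]) -> Double {
    guard !minutes.isEmpty else { return 0 }
    return minutes.map { $0.percentSlight }.reduce(0, +) / Double(minutes.count)
}

// One hour = 60 minute-results; alternate slight tremor fractions
// to simulate a fluctuating symptom.
let hour = (0..<60).map { i in
    TremorMinute(percentUnknown: 0.1, percentNone: 0.6,
                 percentSlight: i % 2 == 0 ? 0.3 : 0.1,
                 percentMild: i % 2 == 0 ? 0.0 : 0.2,
                 percentModerate: 0.0, percentStrong: 0.0)
}
let avg = averageSlight(hour)
```

Repeating the same aggregation per day gives you the longitudinal view: a daily time series of symptom percentages you can track over weeks or months.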

Now, to show you just how well the passive monitoring of the movement disorder API is able to work in conjunction with the active monitoring of a ResearchKit Active Task, I would like to invite up to the stage Akshay, who is going to integrate both into one stellar research application.

[ Applause ]

Hello, everyone.

And welcome to the Advances in Research and Care demo.

In this demo, we’ll see some ResearchKit updates and also implement the movement disorder API.

As part of our ResearchKit repository on GitHub, we have already added an ORK Parkinson’s study app.

This app implements the movement disorder API and also visualizes the tremor and dyskinesia symptom data points that we just saw.

Let’s go ahead and see what this app looks like right now.

We have an Apple Watch app where we implement the movement disorder API, collect the tremor and dyskinesia symptom data points and send them to our phone app.

In our iPhone app, we have a questionnaire, a few Active Tasks, and also visualize these tremor and dyskinesia symptom data points.

Let’s look at what the code looks like, and for this demo, we’ll start with a basic boilerplate application and try to recreate this app.

Here’s my Xcode workspace, and as you can see, in my ResearchKit workspace, I now have the ORK Parkinson’s study app.

We have a task list view controller, where we’ll be adding all our Active Tasks and questionnaires.

A graph view controller, where we’ll be visualizing these tremor and dyskinesia symptom data points.

And an assessment manager, where we will be implementing the movement disorder API.

When a Parkinson’s disease patient, or a PD patient, visits their doctor, they’re asked a certain set of questions, and these include questions about their activities of daily life; for example, on a scale of 0 to 10, how is your pain level today?

Or, what kind of non-motor symptoms are you feeling?

We have already added a subset of such questions in our app.

Now, these questionnaires are usually followed by seven physical tests.

And one of the physical tests is assessing the clarity of speech.

So, let’s go ahead and add the speech recognition Active Task.

In my task list view controller, I’ll go ahead and add the speech recognition Active Task.

And as you can see, we just added an ORKOrderedTask of type speech recognition.

And if you notice, one of the parameters is the speech recognizer locale, which is an enum provided by ResearchKit that represents all the locales supported by the speech recognition API, so that you, as developers, don’t have to worry about checking whether your locale is supported by the speech recognition API.

Now, let’s move on to our assessment manager.

As Gabriel mentioned, we have a variable called manager, which is of type CMMovementDisorderManager.

And if you notice, we have a function that calls the monitorKinesias method for the maximum duration of seven days.

Let’s call this method in our initializer.

Now, whoever creates an object of type assessment manager will simply start tremor and dyskinesia symptom monitoring.

Once we have collected this data, we need a way to query these data points as well.

So, let’s go ahead and add a method that queries this data.

I add a new query-new-assessments method, where I’m calling the query tremor method for a given start date and end date, and the query dyskinesia symptoms method for the same start date and end date.

For this demo, we have already run this query, collected the tremor and dyskinesia symptom data points, and saved them as JSON files.

Let’s go ahead, use those JSON files, and create ResearchKit graphs from them.

I’ll move over to my graph view controller, and here, as you can see, I have a create graph method, which reads the JSON files and creates ResearchKit graphs from them.

Let’s call these methods in our viewDidLoad.

Perfect. Now, let’s run this.

As you can see, we added this speech recognition Active Task, and added the movement disorder API.

Here’s what our Parkinson’s study app looks like.

We have the questionnaire on top.

Let’s quickly go ahead and run through one questionnaire.

As Srinath mentioned earlier, we now have a card view for all these survey items.

And also, all these steps in ResearchKit adhere to the latest iOS design paradigms.

Let’s quickly finish this questionnaire, and that’s it.

Now, let’s move on to the speech recognition Active Task.

The first two steps talk about how to use the speech recognition step.

And as soon as I press the start recording button, I would be repeating the text that I see.

A quick brown fox jumps over the lazy dog.

I am directed to the next step, which is the edit transcription step.

And as Srinath mentioned, this step is optional and can be removed from the task by setting the property allow edit transcript to no.

Perfect. Now, let’s look at the graphs that we created off of tremor and dyskinesia symptom data points.

Since the ResearchKit graphs look really nice in the landscape mode, I’ll quickly turn my phone, and let’s look at the graphs.

Here as we can see, we have all the tremor and dyskinesia symptom data points for a particular day starting from 7 a.m. to 6 p.m. We can see tremor slight, mild, moderate, and strong.

And also, dyskinesia likely.

Perfect. With this, I would like to call Srinath back up on stage and continue with the session.

Thank you.

[ Applause ]

Thanks for that great demo.

So, now let’s take a look at a quick recap of what we went over today.

We started off by talking about updates that we have made to our community by expanding privileges and also updating our release schedule.

We showcased the look and feel of ResearchKit’s new UI, and we also added some new Active Tasks focusing on three main areas of health.

Hearing, speech and vision.

Gabriel spoke to you about the new movement disorder API that’s available on Apple Watch from watchOS 5.

And now, we’ll look to all of you as current or new members of the community to continue to engage with us and provide feedback.

We also encourage you to take advantage of our new release schedule so that way you’ll be able to leverage some of our in-house features like accessibility, localization and QA.

And on that note, as we continue to expand on our library of Active Tasks, we look to all of our developers, researchers and health professionals to help us improve on these.

These Active Tasks are just building blocks, which we hope you can utilize to create much bigger research studies, care plans and treatment mechanisms.

And as you do, we encourage you to contribute back to ResearchKit, so we can continue to improve our foundation and expand on the breadth of Active Task that’s available for everyone to use.

For more information about ResearchKit, please visit our website.

For additional information about the talk, you can visit the following URL.

I also encourage you to stop by our lab session today.

Our teams will be down there and will be really excited to answer any questions you have, as well as discuss more about some of our new updates.

And finally, we really look forward to seeing what you all do with some of these new updates in the coming days.

Thank you.

[ Applause ]
