Sensing Device Motion in iOS 4 

Session 423 WWDC 2010

The Core Motion framework provides a consolidated programming interface for all things motion – providing high-fidelity data that lets you determine precisely how the device is oriented and moving in 3D space. Come discover how you can create incredibly immersive user experiences using only your device’s motion for input.

Chris Moore: Good morning.

My name is Chris.

So Motion has been a huge topic in iPhone OS since day one.

We love it.

You guys love it and most importantly customers love it.

Now, as Steve hinted at on Monday, we have some great new motion sensing capabilities that we’re bringing to the table in iPhone 4 and iOS 4, and we’ll talk about those in this session.

We’ll tell you what those features are.

How you can access them in your code.

Then, we’ll really deep dive into some of the features and give you a good understanding of what they are.

Finally, we’ll write some code.

But before we get to any of that, let’s talk about an idea.

This is an idea that one of my coworkers proposed a few months ago and I like it.

He said, wouldn’t it be neat if we could write a boxing game on iPhone where, in order to punch, you rotate the phone left and right, and the game would be smart enough that if you did sort of a level rotation, the character would throw a level punch.

And if you did a more upward rotation, it would do an uppercut.

Furthermore, wouldn’t it be neat if we could have the user translate the phone left and right or perhaps backward to dodge.

I think this would be a really neat app to see in the App Store.

And with that in mind, let’s talk about where we are today with motion sensing in iPhone OS.

So as you’re all aware, every iPhone OS device that we’ve ever shipped has an accelerometer, and this measures the sum of gravity plus whatever acceleration the user is giving the device; we call that second component user acceleration.

Now, these two things are useful for measuring different physical actions.

So you can use gravity to measure rotations.

When the user rotates the phone left and right or forward and backwards that will show up as a change in the direction of gravity with respect to the device.

Similarly, we can use user acceleration to measure shakes or translational motion.

Unfortunately, in order to try to disambiguate gravity from user acceleration, we need to filter.

We need to use a low-pass filter to isolate gravity or rotational motion or a high-pass filter to isolate user acceleration or translational motion.

And unfortunately, that filtering introduces some side effects.

So the more heavily we low-pass filter, for instance, the more user acceleration we’re able to remove and so the more stable our signal is.

However, the more latency we introduce.

Now, one other disadvantage of using the accelerometer is that it cannot detect rotations around gravity.

So to help make that a little more clear, let’s imagine that we have an iPhone that’s flat on the table so that gravity in this case is going into the screen.

And we spin that while keeping it flat on the table.

That would be a rotation around gravity and that would not be picked up by the accelerometer.

So with that in mind let’s go back to this boxing idea.

Can we do this with only the accelerometer?

So what would we need to do this?

Well, we’d really want to be able to measure rotation around gravity.

That’s because that’s sort of the most natural, most intuitive user motion to trigger a punch.

Unfortunately, we can’t really do that with just the accelerometer.

We also want this to be able to accurately measure rotation when the user is rotating the device very fast.

Fast rotations are kind of the most natural action for punching.

And unfortunately, this is a place where the accelerometer has particular trouble.

Because the faster the user rotates the device, the harder it is for them to hold it steady.

And so the more user acceleration we tend to have bleeding into that accelerometer data.

Now, as I mentioned before, we can combat that by using a low-pass filter.

But unfortunately, the more heavily we low-pass filter the more latency we introduce and fast rotations are exactly when that latency is most noticeable.

Furthermore, we’d want this to accurately be able to measure rotation in the face of high translation or user acceleration, and vice versa.

So you could imagine the user dodging and punching at the same time, and we don’t want one to be confused with the other.

So unfortunately the accelerometer is not a particularly good choice for implementing a game like this.

Now, starting with the iPhone 3GS, and now the iPad and iPhone 4, we’ve included a magnetometer.

And this sensor has been incredibly useful.

We use it as a digital compass and that data is available through the Core Location framework.

Many augmented reality apps were basically made possible by this sensor.

And it helps us deal with that first bullet point that I mentioned on the last slide.

It can actually detect rotations about gravity.

Unfortunately, it’s not particularly responsive and not particularly well suited for high speed applications such as games.

The sensor itself is very noisy and it’s in an environment that’s very noisy magnetically.

iPhone OS and iOS devices are just jam-packed with electronics that have an effect on this sensor.

So it doesn’t really help us for that game.

So that’s where we are today with motion sensing in iOS, in iPhone OS.

Let’s talk about what we’re introducing in iPhone 4, a gyroscope.

Now, this measures very accurately the rate at which the device is rotating about all three axes.

And by itself that raw gyroscope data can be incredibly, incredibly useful and we provide that to you through a new framework in iOS.

However, we went a step further.

Because that gyro data is much, much more useful when we fuse it with accelerometer data using algorithms that we have developed and provide to you in iOS 4.

And we call the output of those algorithms device motion.

So what does device motion include?

Well, it includes the full 3-dimensional attitude of the device, and you might also call this orientation.

Now, from this full 3D attitude, we can extract an estimate of the direction of gravity with respect to the device.

That’s much, much less noisy and much cleaner with respect to user acceleration than any low-pass filter could give us on the raw accelerometer data.

It also includes an estimate of user acceleration without using a high-pass filter.

And this estimate is much less susceptible to influence by rotational motion than any high-pass filtered accelerometer data would give you.

It also includes the instantaneous rotation rate of the device.

Now, I mentioned that the gyro itself measures the rotation rate.

But the rotation rate that you get from device motion is slightly different from that and I’ll talk about that later on.

So, I mentioned that the estimate of gravity that we can extract from device motion’s attitude output is much better for measuring gravity than the accelerometer data is.

And it doesn’t need any low-pass filter to help.

So let’s look at why that is.

Now, there are really two reasons that you need to low-pass filter the raw accelerometer data to extract gravity.

One is to deal with noise from the accelerometer itself and then the other as I mentioned before is to filter out that user acceleration.

So let’s look at these two components in turn.

So if we look at the component of gravity along an axis that’s perpendicular to gravity, so let’s say we have a phone that’s sitting flat on the table and we look at gravity along maybe the Y-axis of the phone that’s parallel to the table itself.

Ideally, that would be all zeros.

Let’s look at what the raw accelerometer data on an iPhone 3GS would look like.

This is raw data straight from a phone, I didn’t do anything to it.

Now, one thing that might jump out at you is the data is quantized.

It’s really restricted to two values here.

And the reason is that the accelerometer in the iPhone 3GS and earlier products is relatively low resolution.

It has a resolution of 8 bits and we’re seeing that quantization in the noisy data here.

Now, if you’ve ever played around with the accelerometer on an iPad, you might have noticed that the data looks a little better than in earlier products and that’s true.

The accelerometer in the iPad is upgraded, and one of the main ways that it’s better is that it has a 12-bit resolution instead of an 8-bit one.

And I’m happy to say that the iPhone 4 has the same upgraded accelerometer that’s in the iPad.

So what does the noise look like from this upgraded accelerometer?

Well, it’s a lot better.

But you can still see it even at this scale.

Let’s look at device motion’s gravity output along that same axis: very, very clean, very low noise.

No filtering needed because of that.

Let’s look at sensitivity to user acceleration now.

Again, this is the second reason that we need to filter the data to extract gravity.

So if we look at this, at the component of gravity along an axis that’s perpendicular to gravity, as we shake the device back and forth along that axis.

So we’re not rotating.

It’s a pure translation.

Again, ideally, that component would be all zero.

What do we see on the accelerometer data?

Well, it looks like this.

As expected that user acceleration shows up directly in the accelerometer data.

That’s what we need to filter out.

Now, I recorded this data directly from an iPhone 4 and while I was recording that raw accelerometer data, I also recorded the device motion gravity estimate along that same axis.

So it’s important to note that the motion the device was seeing is the same.

So the device motion gravity data looks like that, much, much cleaner.

No filtering needed to remove that user acceleration.

So with that in mind, let’s go back to our boxing idea.

Can we do this now?

Well, recall that we need to be able to measure rotation about gravity accurately.

We have that covered with the gyro.

We need this to be accurate for fast rotations and now we can do that because we can accurately separate rotational motion from translational motion.

Furthermore, we need to accurately be able to measure rotation in the face of high user acceleration, and vice versa.

And again, because we can cleanly separate rotation from translation, we can do that.

So I for one can’t wait to see something like this in the App Store.

So [background applause] thanks.

Thanks. So what else can we do?

Well, I mentioned the boxing game and that’s pretty neat.

We also see this being very useful for things like flying games where you can very smoothly and accurately point the nose of the aircraft in three dimensions.

Many driving and racing games on the iPhone currently use an estimate of gravity that they extract from the accelerometer in order to allow the user to steer left and right.

So a tilt left and right would steer.

In some cases a tilt forward and backward would accelerate or brake.

Now, let’s say we add another degree of freedom to this rotation, and we take advantage of this rotation around gravity to, for instance, aim a turret.

So you can add a whole new capability to your game.

Or let’s say that you have a game in which the user is driving in some sort of open cockpit car, like a go-kart for instance.

And it’s kind of pesky.

You have other drivers occasionally coming up right next to the user and trying to run them off the road.

Wouldn’t it be cool if you can allow the user to shake the device left or right to punch left or right and really get those pesky drivers off their tail?

So archery games, or shooting games in general, can really benefit from this.

The most natural rotation to aim left and right is again a rotation around gravity and we can very accurately measure that now.

Simulation games such as skateboarding games can really benefit.

It really expands the space of gestures that you can accurately recognize and use to trigger things like skateboarding tricks.

So we’re really excited about this.

And to demonstrate some of the new capabilities, I’d like to invite my friend Patrick to the stage.


Patrick Piemonte: Hi everyone.

My name is Patrick.

And before we go exploring some of the points Chris just talked about,

I’d like to remind you that if you go to the conference website and then go to this talk, you can find the demo associated with this talk.

So I recommend you check it out.

So here we have an iPhone 4, and I’ll unlock it, and we have our Core Motion Teapot app. This is the same app you can download from the conference page, and you can see we’re in accelerometer mode.

So at first I’d like to point out our Euler angles, starting with pitch.

So if I tilt the device up, you can see the Teapot responds, and we can do our roll and you can see the Teapot responds to roll, but you’ll notice that there isn’t yaw.

So what we’re really measuring right now is two components.

We’re measuring both gravity and as Chris said user acceleration.

So as I touch the device, you can see that the Teapot does a little bit of a wiggle.

So that’s the user acceleration Chris described.

And you can see it if I move the device laterally, you can see how the Teapot still has that little wiggle.

Now there’s ways to mitigate this with filtering.

So we have a gravity filter built-in here and I can turn that up and you’ll notice that we reduced that user acceleration.

But the tradeoff is now we have a higher latency.

So you can see the Teapot responds a little bit more slowly.

So user acceleration is good because it has its own applications; it’s what makes shake gestures possible, pedometers, and applications of that nature.

But if you want precise game control, you need to isolate the gravity component and focus on that and remove that user acceleration.

So just to talk about this in a little more detail, we have a translation mode, and this translation mode will translate the Teapot a distance that’s proportional to that user acceleration.

So what that means is we’ll redraw the Teapot a distance away from the center depending on how much user acceleration it’s experiencing.

So as you can see it, kind of moves around the screen like a rubber band effect.

And if I roll the device, you can see how far the Teapot moves there.

So now I’ll turn off translation mode and now we’ll go into our device motion algorithms.

So now this is the fusion between the raw gyro and accelerometer data with Apple’s algorithms.

And as you could see immediately, the Teapot is very responsive and we have our Euler angles of pitch and we have roll and now we also have yaw.

[ Applause ]

So, we’ll revisit those points I just talked about with the accelerometer.

So, as you can see if I do lateral movement, the Teapot is very steady.

As you remember, this was the accelerometer, and now this is device motion.

There’s a drastic difference.

Now, lastly we can go into translation mode.

Now, before as you could see I was rolling the device and when I roll the device here you could see that the Teapot stayed in the center of the screen.

And if I switch back into accelerometer mode, remember how much the Teapot was moving?

So that can give you an idea now using the device motion algorithms how much control you have over the motion of the device and how much you can isolate that motion and separate out the user acceleration.

So now I’d like to hand it back over to Chris.

Chris Moore: Thank you, Patrick.

[ Applause ]

Alright, so now you guys have had an overview of what these new features are.

Let’s talk about how you can access them in your applications, and you’ll do that using a new framework that we’re introducing in iOS 4 called Core Motion, and it’s very, very easy to use.

So what can you get from Core Motion?

Well, you can get the raw accelerometer data.

Now, this is exactly the same data that you can currently get from the UIAccelerometer API just with a different interface.

You can also get the raw gyro data that I talked about earlier, as well as this great sensor fused device motion data.

So, let’s talk about availability.

All three types of data are supported on iPhone 4.

But unfortunately, older platforms do not have the gyroscope that’s necessary to get the raw gyro data and, obviously, device motion.

So only the raw accelerometer data will be available on older hardware.

So what are the main objects in Core Motion?

Well, there’s CMMotionManager which will be your main interface into Core Motion.

It’s how you will start and stop various types of data as well as tell us how often you want to receive different data.

You can also query for what data is available on the platform that your app is running on.

The raw accelerometer data is stored in instances of CMAccelerometerData.

The raw gyro data is stored in instances of CMGyroData, and device motion is stored in instances of CMDeviceMotion.

Now, there’s one other component with the CMDeviceMotion that we’ll spend some time talking about later.

So I wanted to mention it now, and that’s CMAttitude.

So, excuse me, there is no simulator support for Core Motion.

Macs just don’t have the necessary sensors to support this.

So it’s device side only.

So there are two main methods by which you can receive data from Core Motion.

You can have Core Motion push data to you and this uses blocks.

So when you use this, you’ll need to provide an NSOperationQueue and a block that will be called once per sample of data.

You can also pull data from Core Motion and we expect most games will actually use this approach.

Here, you periodically ask CMMotionManager for the latest sample of data.

Now, this is often done when your view is updated.

For instance, in a CADisplayLink callback.

And there are some tradeoffs to these two approaches.

So let’s talk about those.

Push has the advantage that you will never miss a sample of data.

You have a queue backing you up.

So if we send you a hundred samples of data in one second, you will get all hundred of those samples.

Now, there is some increased overhead involved in this approach.

And in many, many cases if your threads are running a little behind so that you have multiple samples of data queued up, really the best course of action is simply to cut your losses, ignore the older samples and only focus on that latest sample of data.

We can’t really do that with this approach.

So, we would recommend this approach mostly for data collection apps or if your app is using some algorithm that’s very, very sensitive to drop samples.

Now, I should mention that every sample of data that you get from Core Motion has a timestamp associated with it.

So you can always determine how many samples you’ve missed if you do use the pull approach.

Now, this generally is more efficient.

There is usually less code required, especially if you already have some periodic event coming in, like a CADisplayLink callback that determines when you render a frame and when you want data.

Now, the disadvantage is that you may need an additional timer to tell you when to pull data if you don’t already have one of these periodic CADisplayLink-type events coming in.

Despite that, this is what we would recommend for most applications and games.

So, let’s talk about threading very briefly.

Now, Core Motion creates its own thread, and it does this to handle data from the sensors as well as run those device motion algorithms that fuse the accelerometer and gyro data.

So this has some ramifications for you.

That means that if you have Core Motion push data to you, the only code that will be periodically executed on your thread is the code in the block that you send us.

If you’re pulling data, Core Motion will never interrupt your threads.

So how do you use Core Motion?

Very simple.

There are three steps, setup, retrieving data, and clean-up.

Now, a good place to set things up is for instance in startAnimation, if you have a game.

So here we create a CMMotionManager instance.

All we do is allocate and initialize one.

No arguments.

It’s very simple.

Next, we need to determine whether the data that we’re interested in is available.

And in this example, we’re going to be interested in device motion data.

So we have a property in CMMotionManager called isDeviceMotionAvailable that you want to check.

And as always, if that data is not supported on the platform the user is using, you want to fail gracefully or provide some other way for the user to interact with your application.

Next, we set the desired update interval of the data and we do that in this case by setting the deviceMotionUpdateInterval property and we set this to the desired interval in seconds.

So here, let’s say we’re interested in 60 Hz data, so we set it to 1/60.

Now, I should mention that we’re going to be pulling data in this case but even though we’re doing that we still want to set this desired update interval that will tell Core Motion how often it needs to get data from the sensors.

Next, we start updates.

So we’re interested in pulling data, so all we do is call startDeviceMotionUpdates.

No arguments necessary.

If we were interested in having Core Motion push data to us using blocks, then we would call startDeviceMotionUpdatesToQueue:withHandler: and give it the NSOperationQueue and the block that we wanted to handle each sample of data.
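Putting the push approach together, here is a minimal Objective-C sketch. The `motionManager` property and the `processDeviceMotion:` method are assumptions of this sketch, not names from the framework:

```objc
#import <CoreMotion/CoreMotion.h>

- (void)startPushUpdates
{
    // Fail gracefully on hardware without a gyro.
    if (!self.motionManager.isDeviceMotionAvailable)
        return;

    self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

    // The block runs once per sample, on the operation queue we provide.
    [self.motionManager
        startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                            withHandler:^(CMDeviceMotion *motion,
                                          NSError *error) {
            if (motion)
                [self processDeviceMotion:motion]; // hypothetical handler
        }];
}
```

Handing in the main queue keeps the handler off Core Motion’s internal thread; a data-collection app might instead use its own queue.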

The next step is to retrieve data.

Now, if we were using blocks, our block will just be executed whenever a new sample of data is available.

Here we’re pulling data so all we need to do is read the appropriate property from our motion manager whenever we’re interested in data.

Here we’re doing it when our view is updated.

So that property for device motion data is just called deviceMotion.

Clean-up. So I should mention that you always want to stop sensor data as soon as you’re done with it.

That will allow us to turn off the necessary sensor hardware, and in some cases algorithms that are processing it, and ultimately save the user’s battery.

So, here we started device motion data.

So we simply call stopDeviceMotionUpdates and then we go ahead and release our motionManager instance.

Very, very simple.
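The three steps just described — setup, retrieving data, clean-up — look roughly like this as one Objective-C sketch. The `startAnimation`, `drawView`, and `stopAnimation` methods, and the `motionManager` instance variable, are stand-ins for whatever hooks your app already has:

```objc
#import <CoreMotion/CoreMotion.h>

// Step 1: setup -- create and configure the motion manager.
- (void)startAnimation
{
    motionManager = [[CMMotionManager alloc] init];
    if (motionManager.isDeviceMotionAvailable) {
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
        [motionManager startDeviceMotionUpdates]; // pull approach
    }
}

// Step 2: retrieve -- pull the latest sample each rendered frame.
- (void)drawView
{
    CMDeviceMotion *motion = motionManager.deviceMotion;
    if (motion) {
        CMAcceleration gravity = motion.gravity;
        // ... drive the scene from gravity, attitude, rotationRate ...
    }
}

// Step 3: clean up -- stop updates to save the user's battery.
- (void)stopAnimation
{
    [motionManager stopDeviceMotionUpdates];
    [motionManager release]; // manual retain/release, pre-ARC
    motionManager = nil;
}
```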

So to summarize this section, there are two methods that we can use to receive data, push and pull.

Push uses blocks.

With pull, all you do is periodically look at the desired property of the CMMotionManager, very simple.

Next, all processing by Core Motion is done on its own thread.

Finally, there are three steps to using Core Motion, setup, retrieving data, and cleanup, very simple.

So, now that you have an idea of how to use the API let’s really deep dive into what this new device motion data contains.

So as I mentioned before, device motion data is stored in instances of CMDeviceMotion and that has a few properties.

There’s a property called “attitude” which contains that full 3-dimensional attitude data that’s going to be so incredibly useful.

It also contains an estimate of gravity with respect to the device’s current reference frame.

Now, I mentioned earlier that we can actually extract this estimate from the attitude data directly and I’ll still show you how to do that later on to illustrate a point.

But in reality, you actually won’t need to do that, because we extract it for you in CMDeviceMotion. And we do that because many of you have applications that currently low-pass filter the accelerometer data to estimate the direction of gravity, and we want to make it as easy as possible for you to take advantage of this new sensor fusion of accelerometer and gyro data that we provide.

So all you have to do is start using device motion and look at that gravity property and you can get rid of that low-pass filter that you’re currently using.

We also provide an estimate of user acceleration with CMDeviceMotion, without using a high-pass filter, that’s much better isolated from rotational motion.

And then the instantaneous rotation rate of the device.

So let’s look at those middle two properties first.

The middle two are gravity and user acceleration.

And these are both stored in instances of a very simple struct called CMAcceleration.

Three components, X, Y, and Z, giving the acceleration along each of those axes in units of g’s.

And the coordinate frame that we’re using in Core Motion is exactly the same as the one that is currently used by the UIAccelerometer API for the accelerometer data, as well as by Core Location for the raw magnetometer data.

Namely, the Z-axis is positive coming out of the screen of the device, the Y-axis is positive running towards the top of the device, and the X-axis is positive running out of the right side of the device or the Micro-SIM tray on iPhone 4.

Now, before we get to rotation rate, one more thing.

So I mentioned that you can get rid of your low-pass filter for isolating gravity from the raw accelerometer data if you use device motion’s gravity output instead.

But you don’t want to completely forget what you know about low-pass filtering because the user acceleration data that you get from device motion will contain noise from the accelerometer.

So you probably want to low-pass filter that to smooth it out a little bit.

So here, we’re showing a very, very simple nonadaptive low-pass filter.

You can get as fancy as you want.

And we encourage you to play around with different filtering parameters and really determine what’s best for your application.
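The kind of filter meant here can be sketched in a few lines of C. This is a generic first-order, nonadaptive low-pass filter, not the slide’s exact code; the smoothing factor `alpha` is the parameter you would tune:

```c
typedef struct { double x, y, z; } Vec3;

/* First-order low-pass filter: blend each new sample into a running
 * estimate. alpha near 0 = heavy smoothing (smoother but laggier),
 * alpha near 1 = light smoothing (more noise passes through). */
static Vec3 low_pass(Vec3 prev, Vec3 sample, double alpha)
{
    Vec3 out;
    out.x = prev.x + alpha * (sample.x - prev.x);
    out.y = prev.y + alpha * (sample.y - prev.y);
    out.z = prev.z + alpha * (sample.z - prev.z);
    return out;
}
```

You would feed each new userAcceleration sample through this, carrying the previous output forward between frames.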

Now, let’s look at rotation rate.

So this property is an instance of CMRotationRate, which is again a very, very simple struct giving the rotation rate in units of radians per second around the X, Y, and Z-axes of the device.

Now, if you remember back to physics class, you probably studied the right-hand rule and that’s what we’ll use to determine the direction of positive rotation rate around an axis.

So to review, if you were to take your hand and wrap it around an axis so that your thumb points in the positive direction of the axis, then the direction that your fingers curl will give the direction of positive rotation rate, and that’s shown by the blue arrow on the slide.

Now, I mentioned earlier that CMDeviceMotion’s rotationRate property is different from the raw gyro data that you get from CMGyroData.

How does it differ?

It differs in bias.

So the raw gyro data has a bias associated with it.

Now what that means is if the device is not rotating, so let’s say, it’s sitting flat on a table, then the gyro will not read zero.

It will read some nonzero value that actually differs from device to device and even within a given device, it changes with a lot of things that you really have no control over, things such as the temperature of the gyro.

Now, one of the main jobs of these device motion algorithms that we’ve implemented is to actively track and remove bias from the gyro data.

And we provide that bias corrected gyro data to you through CMDeviceMotion’s rotationRate property.
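To get a feel for what bias correction means, here is a toy version in C: average the gyro reading over a window where the device is known to be still, then subtract that from later samples. This is an illustration only — Apple’s device motion algorithms track bias continuously and far more robustly than this:

```c
typedef struct { double x, y, z; } RotRate;

/* Toy bias estimate: mean of n readings taken while stationary. */
static RotRate estimate_bias(const RotRate *still, int n)
{
    RotRate bias = {0.0, 0.0, 0.0};
    for (int i = 0; i < n; i++) {
        bias.x += still[i].x / n;
        bias.y += still[i].y / n;
        bias.z += still[i].z / n;
    }
    return bias;
}

/* Subtract the estimated bias from a raw gyro reading. */
static RotRate remove_bias(RotRate raw, RotRate bias)
{
    RotRate r = { raw.x - bias.x, raw.y - bias.y, raw.z - bias.z };
    return r;
}
```

A fixed, one-shot estimate like this would drift as temperature changes, which is exactly why the real algorithms track bias actively.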

Now, let’s talk about my favorite property of CMDeviceMotion, and that’s that full 3-dimensional attitude.

Now, you might also call this the orientation of the device.

But fundamentally what it is, is it’s a rotation.

It’s a rotation from a fixed reference frame that Core Motion chooses, and I’ll show you exactly what that is on the next slide.

But it’s a rotation from that reference frame to the device’s current frame.

And as with any rotation, there are a number of ways that you can express it mathematically.

You can use a rotation matrix; you can use a Quaternion; or you can use Euler angles, pitch, roll, and yaw.

And we provide all three of those representations to you through CMAttitude.

Now, let’s talk about this reference frame.

So it’s chosen when your application first calls startDeviceMotionUpdates.

And it’s chosen in such a way that the Z-axis is always vertical.

So that means that if you were to express gravity in this reference frame, it would be equal to the vector 0, 0, -1.

Now, the X and Y-axes are both orthogonal to gravity.

They’re in the horizontal plane.

And of course they’re orthogonal to each other.

But it is not the case, for instance, that X is always aligned with North, or that Y is always aligned with North.

So they’re rotated by an arbitrary amount with respect to North.

So all three reference frames that you see up there now (North, East, Up; X1, Y1, Z1; and X2, Y2, Z2) are equally valid reference frames for Core Motion to choose.

So let’s go through an example to really hammer this home.

I mentioned earlier that you can extract from that 3-Dimensional attitude an estimate of gravity in the device’s frame.

This is how you do that.

So let’s say we have an instance of device motion and we extract a rotation matrix representation of its attitude.

Now, recall that that attitude is the rotation from that fixed reference frame to the device’s current frame.

Now recall the gravity when expressed in that fixed reference frame is always equal to the vector 0, 0, -1.

So, if we have a function that multiplies a 3×3 matrix by a three-element vector and returns the result, and we call that, the result will be exactly equal to device motion’s gravity property.

Or, mathematically, deviceMotion.gravity is equal to that rotation matrix times the vector (0, 0, -1).
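In code, that looks like the following C sketch. The matrix-times-vector helper is something you would write yourself; it is not provided by Core Motion:

```c
typedef struct { double x, y, z; } Vector3;

/* Multiply a 3x3 rotation matrix (row-major) by a vector. */
static Vector3 mat3_mul(const double m[3][3], Vector3 v)
{
    Vector3 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z;
    return r;
}

/* Gravity in the device frame = the attitude rotation applied to
 * (0, 0, -1), which is gravity in the fixed reference frame. */
static Vector3 gravity_from_attitude(const double rotationMatrix[3][3])
{
    Vector3 down = { 0.0, 0.0, -1.0 };
    return mat3_mul(rotationMatrix, down);
}
```

With an identity attitude (device frame aligned with the reference frame), this gives (0, 0, -1), matching deviceMotion.gravity for a device lying flat.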

So many applications currently on the App Store that use the accelerometer data to estimate gravity want to measure rotation not from the reference frame that Core Motion chooses, for instance, where gravity is always along the -Z-axis.

But they want to measure rotations from some more comfortable resting orientation for the user.

And we provide a function to make that really easy in CMAttitude and that function is called multiplyByInverseOfAttitude.

So how does it work?

Well, let’s look at an example.

Let’s say that you’re writing a game, and when you render your first frame of animation, you want to grab the device’s orientation and use that as your reference frame.

That’s the reference resting orientation that you want to give the user.

So all you have to do then is retain the instance of CMAttitude for that orientation.

So here, we’ll do that, and we’ll call it referenceAttitude.

Then, on subsequent frames, all you have to do is grab the current attitude from Core Motion, and that will give the rotation from Core Motion’s fixed reference frame to the device’s frame, and then you can use this multiplyByInverseOfAttitude function to change the reference frame to the one stored in the variable referenceAttitude.

So after calling that, the variable attitude will contain the rotation from referenceAttitude to the device’s current frame.
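In rotation-matrix terms, multiplyByInverseOfAttitude amounts to composing the current attitude with the inverse of the reference attitude, and a rotation matrix’s inverse is just its transpose. A C sketch, with the caveat that the multiplication order here is an assumption about the frame convention, not something the API documents in this talk:

```c
/* Compose the current attitude with the inverse (transpose) of a
 * reference attitude, yielding the rotation relative to that
 * reference: out = current * reference^T. The order is an assumed
 * convention; verify against device behavior. */
static void relative_attitude(const double current[3][3],
                              const double reference[3][3],
                              double out[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            out[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                out[i][j] += current[i][k] * reference[j][k];
        }
}
```

One property holds regardless of convention: when the current attitude equals the reference attitude, the result is the identity rotation, which is exactly the "spout back at center" behavior in the demo.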

And to demonstrate that, I’d like to invite Patrick back up to the stage.


[ Applause ]

Patrick Piemonte: We’re in device motion mode now, not using the accelerometer, just device motion, and you’ll notice that we’re basically rotating the Teapot right now by the inverse of the attitude coming from Core Motion.

So with the spout aligned here along the X-axis, the rotation is identity.

So that means that the device’s frame is aligned with the frame of Core Motion.

So if your user is playing the game in a different orientation and you’d like to change that reference frame for future updates in your app, you can cache a sample as Chris described, and then for all future samples you can use that function multiplyByInverseOfAttitude with that cached attitude.

And from that point forward, you’ll have a new reference.

So we do that here in this test application when you hit the Reset Reference Attitude button.

So you see how the spout is aligned along the X-axis again.

So now the Teapot will rotate from that orientation.

So now, we’re going to take a deep dive into the code and look at the same Core Motion Teapot application that we’ve been showing you.

So again, this is available on the session website associated with our talk.

So we have the Teapot project open here.

And the first thing we’d like to do is add the Core Motion framework to the project.

So now, we have the framework added, and the project is pretty small.

We have our app delegate and we have our primary view and then we have an abstraction for our accelerometer filtering.

Now, we’re going to integrate Core Motion into the view itself.

So here we’ll just drop in our header import, and I’ll switch over to our implementation of the view.

And we need to first allocate our Core Motion manager.

So we do that in our start animation.

So we’ll allocate our motion manager.

And once we have our motion manager allocated, now we can configure it.

So we’d like to set up, basically, our sensor update intervals.

So you can do that pretty easily by setting these two properties.

For the accelerometer, we have accelerometerUpdateInterval.

And for device motion, we have deviceMotionUpdateInterval.

And in this particular case, we’re setting it to 100 hertz for each.

[ Pause ]

So now that we have our motion manager configured, we can start updates for the specific sensor based on the mode our application is in.

So you simply call startAccelerometerUpdates.

And for motion updates, you would just call startDeviceMotionUpdates.

Now, since all iOS devices have shipped with an accelerometer, there is a property that checks if the accelerometer is available, but we’ll really need to check if device motion is available.

And in this particular application, we will enable the Device Motion button in the Teapot app based on whether device motion is available or not.

So on iPhone 4, this deviceMotionAvailable property will return YES, and that’s everything for setup.

So, now we have our app set up and we need to begin drawing.

So if we go to our drawing method: if you attended the game design sessions on Tuesday morning, they talked about how in your draw methods you should read in your accelerometer input before doing a lot of your drawing to the screen.

And we do the same here.

So we’ll take in our user input first, which is the device motion data.

So for accelerometer data [ Pause ]

we have our accelerometerData property.

So there are two styles, as Chris mentioned.

We have our block-based mechanism, which is push, and the property-based mechanism, which is pull.

In this particular case, the Teapot app uses the pull method.

So what we are doing is we’re checking the property every time we draw the Teapot on the screen and rendering it with the latest data that Core Motion has seen from the sensors.

And of course, in device motion mode, we can do the same.

So again, we’re checking the deviceMotion property of the motion manager.

And then in that device motion property, we’re grabbing the attitude and that’s what we used to rotate the Teapot.

Now, we had shown this multiplyByInverseOfAttitude in our second demo.

So if you’ve cached that reference attitude, this is where it actually gets used.

And that’s everything for servicing the data.

Now, just to wrap up: when we call stopAnimation,

[ Pause ]

we’ll stop the accelerometer updates, stop our device motion updates, and then we’ll perform our cleanup.

So that’s how the Teapot app works.

Thank you.

[ Applause ]

Chris Moore: Thanks, man.

[ Applause ]

So recall that the new information that you can get on iPhone 4 through Core Motion is the full 3-dimensional attitude of the device, a much, much improved estimate of user acceleration that’s not influenced by rotational motion and does not require a high-pass filter, as well as the instantaneous rotation rate of the device.

So what do we do with this new sensor under the hood in iPhone 4?

Well, if you went to the Core Location talk yesterday, you probably heard about us using it for GPS aiding and we do this if you specify that you want navigation accuracy in Core Location.

We don’t do it by default because it uses a little extra power and it’s predominantly useful when the device is mounted in the car and you are in areas where the GPS quality is relatively low.

So if you’re driving around downtown San Francisco, that’s a perfect time to use this.

And the gyro will greatly improve the position and especially the course information that you get from GPS.

This will be a huge help to map-matching applications that use both position and course to figure out which road to snap you to.

We also use it for compass compensation under the hood.

So if we detect an abrupt change in magnetic interference by comparing the compass and gyro data then we can coast using the gyro data until that interference goes away.

Let’s talk about what’s more interesting now.

What can you do with this?

Well, we see gaming as being obviously a huge application of it.

And we’ve talked about some things that games could use it for.

But it can also be a huge help to augmented reality applications.

After you get that initial compass fix, you can switch over to using the gyro and get much, much more responsiveness.

3D visualization and mapping apps will really be possible now, and much, much more.

We have been absolutely blown away by the amazing things that you guys have done with the accelerometer and the magnetometer, and we can’t wait to see what you can do with this new sensor and these new algorithms.

For other sessions, the game design sessions are being repeated tomorrow morning.

Unfortunately, the other three related sessions have already happened but you can check out the videos online.

So if you would like to learn more, you can contact Allan Schaffer, he’s our Graphics Evangelist as well as the documentation and forums that we have available on our Apple Developer website.
