Building a virtual gyro

Originally posted by Michael E Stanley of Freescale Semiconductor in The Embedded Beat on Mar 12, 2013

In Orientation Representations Part 1 and Part 2, we explored some of the mathematical ways to represent the orientation of an object. Now we're going to apply that knowledge to build a virtual gyroscope using data from a 3-axis accelerometer and a 3-axis magnetometer. Reasons you might want to do this include "cost" and "cost". Cost #1 is financial: gyros tend to be more expensive than the other two sensors, so eliminating them from the BOM is attractive. Cost #2 is power: the power consumed by a typical accel/mag pair is significantly less than that consumed by a MEMS gyro. The downside of a virtual gyro is that it is sensitive to linear acceleration and uncorrected magnetic interference. If either of those is present, you probably still want a physical gyro.

So how do we go from orientation to angular rates? It’s conceptually easy if you step back and consider the problem from a high level. Angular rate can be defined as change in orientation per unit time. We already know lots of ways to model orientation. Figure out how to take the derivative of the orientation and we’re there!

In our prior postings, we’ve discussed a number of ways to represent orientation. For this discussion, we will use the basic rotation matrix. Jack B. Kuipers has a nice derivation of the derivative of direction cosine matrices in his “Quaternions and Rotation Sequences” text – one of my most used textbooks.  It makes a good starting point.  Paraphrasing his math:

Let:

  1. $\mathbf{v}_f$ = some vector $\mathbf{v}$ measured in a fixed reference frame
  2. $\mathbf{v}_b$ = the same vector measured in a moving body frame
  3. $RM_t$ = the rotation matrix which takes $\mathbf{v}_f$ into $\mathbf{v}_b$ at time $t$
  4. $\boldsymbol{\omega}$ = the angular rate through the rotation

Then at any time t:

$$\mathbf{v}_b = RM_t\,\mathbf{v}_f \tag{5}$$

Differentiate both sides (use the product rule on the RHS):

$$\frac{d\mathbf{v}_b}{dt} = \frac{dRM_t}{dt}\,\mathbf{v}_f + RM_t\,\frac{d\mathbf{v}_f}{dt} \tag{6}$$

Our assumption of no linear acceleration and no magnetic interference implies that the measured vector is constant in the fixed frame:

$$\frac{d\mathbf{v}_f}{dt} = 0 \tag{7}$$

Then:

$$\frac{d\mathbf{v}_b}{dt} = \frac{dRM_t}{dt}\,\mathbf{v}_f \tag{8}$$

We know that:

$$\mathbf{v}_f = RM_t^{-1}\,\mathbf{v}_b \tag{9}$$

Plugging this into (8) yields

$$\frac{d\mathbf{v}_b}{dt} = \frac{dRM_t}{dt}\,RM_t^{-1}\,\mathbf{v}_b \tag{10}$$

In a previous posting (Accelerometer placement – where and why), we learned about the transport theorem, which describes the rate of change of a vector in a moving frame:

$$\frac{d\mathbf{v}_f}{dt} = \frac{d\mathbf{v}_b}{dt} - \boldsymbol{\omega}\times\mathbf{v}_b \tag{11}$$

Those who take the time to check will note that we have inverted the polarity of $\boldsymbol{\omega}$ in Equation 11 relative to the prior posting. There, $\boldsymbol{\omega}$ was the angular velocity of the body frame in the fixed reference frame; here we want it from the opposite perspective, which matches what a gyro strapped to the body would report.
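To make that sign convention explicit, write $\boldsymbol{\omega}_{fb}$ for the angular velocity of the body frame as seen from the fixed frame (this symbol is just shorthand for this paragraph; it appears nowhere else in the derivation). The gyro-style rate used here is its negative, so the usual statement of the transport theorem turns into Equation 11:

$$\boldsymbol{\omega} = -\boldsymbol{\omega}_{fb} \quad\Longrightarrow\quad \frac{d\mathbf{v}_f}{dt} = \frac{d\mathbf{v}_b}{dt} + \boldsymbol{\omega}_{fb}\times\mathbf{v}_b = \frac{d\mathbf{v}_b}{dt} - \boldsymbol{\omega}\times\mathbf{v}_b$$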

And again,

$$\frac{d\mathbf{v}_f}{dt} = 0 \tag{12}$$

so

$$\frac{d\mathbf{v}_b}{dt} = \boldsymbol{\omega}\times\mathbf{v}_b \tag{13}$$

Equating equations 10 and 13:

$$\boldsymbol{\omega}\times\mathbf{v}_b = \frac{dRM_t}{dt}\,RM_t^{-1}\,\mathbf{v}_b \tag{14}$$

Since this holds for any vector $\mathbf{v}_b$, we can drop $\mathbf{v}_b$ and equate the operators themselves:

$$[\boldsymbol{\omega}\times] = \frac{dRM_t}{dt}\,RM_t^{-1} \tag{15}$$

where:

$$[\boldsymbol{\omega}\times] = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix} \tag{16}$$

Going back to the fundamentals in our first calculus course and using a one-sided approximation to the derivative:

$$\frac{dRM_t}{dt} \approx \frac{1}{\Delta t}\left(RM_{t+1} - RM_t\right) \tag{17}$$

where $\Delta t$ is the time between orientation samples. Then:

$$[\boldsymbol{\omega}_b\times] = \frac{1}{\Delta t}\left(RM_{t+1} - RM_t\right)RM_t^{-1} \tag{18}$$

Recall that for rotation matrices, the transpose is the same as the inverse:

$$RM_t^T = RM_t^{-1} \tag{19}$$

$$[\boldsymbol{\omega}_b\times] = \frac{1}{\Delta t}\left(RM_{t+1} - RM_t\right)RM_t^T \tag{20}$$

Equation 20 is a truly elegant equation. It shows that you can calculate angular rates based upon knowledge of only the last two orientations. That makes perfect intuitive sense, and I'm ashamed when I think how long it took me to arrive at it the first time.

An alternate form that is even more attractive can be had by carrying out the multiplications on the RHS:

$$[\boldsymbol{\omega}_b\times] = \frac{1}{\Delta t}\left(RM_{t+1}RM_t^T - RM_tRM_t^T\right) \tag{21}$$

$$[\boldsymbol{\omega}_b\times] = \frac{1}{\Delta t}\left(RM_{t+1}RM_t^T - I_{3\times 3}\right) \tag{22}$$

For the sake of being explicit, let’s expand the terms.  A rotation matrix has dimensions 3×3.  So both left and right hand sides of Eqn. 22 have dimensions 3×3.

$$\frac{1}{\Delta t}\left(RM_{t+1}RM_t^T - I_{3\times 3}\right) = \frac{1}{\Delta t}\,W \tag{23}$$

$$W = RM_{t+1}RM_t^T - I_{3\times 3} = \begin{pmatrix} 0 & W_{1,2} & W_{1,3} \\ W_{2,1} & 0 & W_{2,3} \\ W_{3,1} & W_{3,2} & 0 \end{pmatrix} \tag{24}$$
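To see why the diagonal of $W$ is written as zero, rearrange Equation 22: it says that, to first order in the sample interval,

$$RM_{t+1}RM_t^T \approx I_{3\times 3} + \Delta t\,[\boldsymbol{\omega}_b\times] \quad\Longrightarrow\quad W \approx \Delta t\,[\boldsymbol{\omega}_b\times]$$

and a skew-symmetric matrix such as $[\boldsymbol{\omega}_b\times]$ has an exactly zero diagonal.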

In practice the diagonal of $W$ is only approximately zero: the diagonal terms of $RM_{t+1}RM_t^T$ will be close to (but not exactly) one, and the subtraction of the identity matrix cancels them. Treating the residue as zero is a small-angle approximation. Then:

$$[\boldsymbol{\omega}\times] = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix} = \frac{1}{\Delta t}\begin{pmatrix} 0 & W_{1,2} & W_{1,3} \\ W_{2,1} & 0 & W_{2,3} \\ W_{3,1} & W_{3,2} & 0 \end{pmatrix} \tag{25}$$

and we have:

$$\omega_x = \frac{1}{2\Delta t}\left(W_{3,2} - W_{2,3}\right) \tag{26}$$

$$\omega_y = \frac{1}{2\Delta t}\left(W_{1,3} - W_{3,1}\right) \tag{27}$$

$$\omega_z = \frac{1}{2\Delta t}\left(W_{2,1} - W_{1,2}\right) \tag{28}$$

Once we have orientations, we’re in a position to compute corresponding angular rates with

  • One 3×3 matrix multiply operation
  • 3 scalar subtractions
  • 3 scalar multiplications

at each time point. Sweet!
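To make the recipe concrete, here is a minimal NumPy sketch (the function and variable names are mine, purely illustrative; it assumes the inputs are already orthonormal rotation matrices and that the rotation between samples is small):

```python
import numpy as np

def virtual_gyro(rm_prev, rm_next, dt):
    """Estimate body-frame angular rates (rad/s) from two successive
    orientations, per equations (22) and (26)-(28).

    rm_prev -- 3x3 rotation matrix RM_t
    rm_next -- 3x3 rotation matrix RM_{t+1}
    dt      -- sample interval in seconds
    """
    # W = RM_{t+1} RM_t^T - I_{3x3}  (equation 24)
    w = rm_next @ rm_prev.T - np.eye(3)
    # Average the paired off-diagonal estimates of each rate (26)-(28)
    return np.array([w[2, 1] - w[1, 2],
                     w[0, 2] - w[2, 0],
                     w[1, 0] - w[0, 1]]) / (2.0 * dt)

# Quick sanity check: a 0.1 rad/s rotation about z, sampled at 100 Hz
dt = 0.01
theta = 0.1 * dt
c, s = np.cos(theta), np.sin(theta)
rm_next = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
print(virtual_gyro(np.eye(3), rm_next, dt))  # approximately [0, 0, 0.1]
```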

Some time ago, I ran a Matlab simulation to look at outputs of a gyro versus outputs from a “virtual gyro” based upon accelerometer/magnetometer readings.  After adjusting for gyro offset and scale factors, I got pretty good correlation, as can be seen in the figure below.

[Figure: measured gyro outputs overlaid with virtual gyro outputs computed from accelerometer/magnetometer readings, after offset and scale-factor adjustment]

You will notice that we started with an assumption that we already know how to calculate orientation given accelerometer/magnetometer readings.  There are many ways to do this.  I can think of three off the top of my head:

  • Compute roll, pitch and yaw as described in Freescale AN4248.  Use those values to compute rotation matrices as described in Orientation Representations: Part 1.  This approach uses Euler angles, which I like to stay away from, but you could give it a go.
  • Use the Android getRotationMatrix() [4] to compute rotation matrices directly.  This method uses a sequence of cross products to arrive at the current orientation (a sketch of this construction appears after this list).
  • Use a solution to Wahba’s problem to compute the optimal rotation for each time point.  This is my personal favorite, but I think I’ll save further explanation for a future posting.
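For the second option, here is a sketch of the cross-product construction. This is my own condensed version, not Android's actual source; it assumes the accelerometer reads +1 g upward at rest (no linear acceleration) and a magnetometer already corrected for hard- and soft-iron effects:

```python
import numpy as np

def orientation_from_accel_mag(accel, mag):
    """Build RM_t (fixed frame -> body frame) from one accelerometer
    sample and one magnetometer sample using cross products, in the
    spirit of Android's SensorManager.getRotationMatrix()."""
    up = accel / np.linalg.norm(accel)   # "up" direction in body coordinates
    east = np.cross(mag, up)             # east is perpendicular to both
    east /= np.linalg.norm(east)
    north = np.cross(up, east)           # completes a right-handed ENU triad
    # east/north/up are the fixed-frame axes expressed in body coordinates.
    # Stacking them as columns gives a matrix taking fixed-frame vectors
    # into the body frame, matching the RM_t convention used above.
    return np.column_stack((east, north, up))
```

Feeding successive outputs of such a function into equations (26)-(28) completes the virtual gyro.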

Whichever technique you use to compute orientations, you need to pay attention to a few details:

  • Remember that non-zero linear acceleration and/or uncorrected magnetic interference violate the physical assumptions behind the theory.
  • The expressions shown generally rely on a small-angle assumption.  That is, the change in orientation from one time step to the next is relatively small.  You can encourage this by using a short sampling interval.  You should soon see an app note that my colleague Mark Pedley is working on that discards that assumption and deals with large angles directly.  I like the form I've shown here because it is more intuitive.
  • Noise in the accelerometer and magnetometer outputs will result in very visible noise in the virtual gyro output.  You will want to low-pass filter your outputs prior to using them (a minimal filter sketch follows this list).  Mark will be providing an example implementation in his app note.
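As an illustration of that last point, even a single-pole low-pass filter knocks the noise down considerably. The sketch below is a placeholder of my own, not the design from Mark's app note, and the smoothing constant alpha is a tuning choice:

```python
import numpy as np

class RateSmoother:
    """Single-pole low-pass (exponential) filter for virtual gyro rates."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha       # 0 < alpha <= 1; smaller = smoother output
        self.state = np.zeros(3)

    def update(self, omega):
        # y[n] = (1 - alpha) * y[n-1] + alpha * x[n]
        self.state = (1.0 - self.alpha) * self.state \
                     + self.alpha * np.asarray(omega)
        return self.state
```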

This is one of my favorite fusion problems.  There’s a certain beauty in the way that nature provides different perspectives of angular motion.  I hope you enjoy it also.

References

  1. Freescale Application Note Number AN4248: Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors
  2. Orientation Representations: Part 1 blog posting on the Embedded Beat
  3. Orientation Representations: Part 2 blog posting on the Embedded Beat
  4. getRotationMatrix() function defined at http://developer.android.com/reference/android/hardware/SensorManager.html
  5. Wikipedia entry for "Wahba's problem"
  6. U.S. Patent Application 13/748381, SYSTEMS AND METHOD FOR GYROSCOPE CALIBRATION, Michael Stanley, Freescale Semiconductor

