Karen’s Blog – Pittsburgh IMAPS Workshop

Packaging means a lot of different things to a lot of different people. Webster’s dictionary defines package as a “group or a number of things, boxed and offered as a unit.”

For my school-age daughters, packaging means figuring out how to fit the components of their lunch into these bento-box-like containers I bought at Target in hopes of simplifying their packing and assembly process (at low cost and decent performance, mind you). Two months into the school year, the packaging appears to be weathering extreme temperatures (cold fridge to hot dishwasher), drop tests (I am sure you need no explanation here) and what I can only describe as a “cram test” (how many Oreos can you fit inside without the box breaking or Mom noticing).

But if you are in the microelectronics/MEMS industry, when you hear the word packaging your mind goes to the various MEMS packages that can contain a multitude of electrical and mechanical components that are inter-connected to the outside world for devices such as MEMS microphones, airbag accelerometers, gyros, RF MEMS and the list just goes on and on.

I had the pleasure of learning more about the challenges and opportunities affecting MEMS packaging at a recent International Microelectronics Assembly and Packaging Society (IMAPS) workshop held in my hometown of Pittsburgh and at my alma mater, Carnegie Mellon University (CMU). Presenters included our host, Gary Fedder, CMU’s Director of the Institute for Complex Engineered Systems (ICES); Maarten de Boer, CMU Associate Professor, Mechanical Engineering; Brett Diamond, MEMS Development Manager, Akustica; Erdinc Tatar, CMU Graduate Student; and yours truly.

To say that my presentation was different from the others is a gross understatement – I talked about the potential for MEMS and sensors in the expanding world of Internet of Things (IoT) as well as an overview of MEMS/sensors standardization and the proactive role that MEMS Industry Group (MIG) and my partners/members/colleagues are playing in addressing the remaining challenges to commercialization. You can access my presentation on the MIG resource library webpage (no password required).

As the others’ presentations are not posted (at least to my knowledge), I figured I’d give you a quick synopsis of what I learned and heard. Gary basically gave an overview of how amazing and fantastic CMU’s engineering, robotics and computer science departments are, and how CMU is now partnering and working with universities and centers around the globe. Literally. They even have two programs going on in China.

Maarten’s presentation on the “Effect of Gas Environment and Materials on Electrical Contact Reliability in Micro- and Nanoswitches” was illuminating as I am somewhat familiar with the work that GE Global Research is doing on RF MEMS switches and am aware of the incredible market potential for this area (I wrote a featured blog on this topic for GE’s “Edison’s Desk” earlier this year). Maarten and his colleagues at CMU are taking this a bit further, by looking into different materials and applications at the nano scale.

Brett’s presentation on “Challenges in the Design, Manufacturing, and Usage of MEMS Microphones” was really impressive, as it gave a very in-depth view of the true challenges of packaging a device that by design needs to be open to the environment. No small task. It was equally exciting to hear Brett hint at the future applications and integrations with their MEMS mics (I will not repeat them here at the risk of disclosing something I shouldn’t). But let’s just say that the market applications for MEMS microphones are just at the beginning – the potential is really big.

Erdinc’s presentation on “Environmental and Packaging Effects on High-Performance Gyroscopes” revealed why so many engineers love their work in the lab – as they are able to tinker and explore with new materials and processes. It’s another reason why I love my work in MEMS/sensors – because there is still an opportunity for “new science.”

MIG helped sponsor the event by providing snacks (including some great chocolate cookie/pie things that melted in my mouth) for attendees to enjoy during the workshop and to facilitate networking. What I learned at the workshop confirmed what I suspected before – packaging is in the eye of the beholder – and at the end of the day, what really matters is that the package comes at a cost reflective of its application and performance expectations. Therefore, it’s important to communicate those expectations from both the user’s and the supplier’s perspectives.

Packaging means a lot of different things and if done well it can mean the difference between success and failure. Or in my daughters’ case, deciding on how many Oreos to fit into the package before it fails and Mom finds out.


Industry Survey: The Southwest Center for Microsystems Education

Submitted by The Southwest Center for Microsystems Education

The Southwest Center for Microsystems Education (SCME), a National Science Foundation Advanced Technological Education Center, is working on a project to better understand the current state of the micro- and nanotechnology-based industry technician workforce. Through this project, we aim to enable our center to best support Community Colleges’ efforts to start micro- and nanotechnology programs that use SCME-developed curricula.

One goal of this project is a map of the related hi-tech industries relative to their local Community Colleges.  We can then identify the regions in which our programs will make the greatest impact.  This allows us to advocate for and support the adoption of micro and nano education by Community Colleges on behalf of their regional micro, nano and related industries.

Click here to view the last revision of the map and hiring data through 2012.

Our second goal will be a trend analysis of several mapped industries.  The SCME has divided the micro-nano related industries into several categories based on specialty and industry revenue.  We aim to identify at least ten companies in each bracket and to determine their workforce needs so that we can target our educational impact efforts to yield the best results for both industry and education!  These trends are presented to the Community Colleges near micro and nano tech related clusters, to provide a justification for incorporating microsystems based curricula into their programs.  This enables the SCME to distribute scarce educational resources into the educational institutions where their impact will be the highest, resulting in a more informed and capable workforce.

This is where we need your help!   As leaders in MEMS and related industries, please complete the survey found at the following link:

https://www.surveymonkey.com/s/RK6TG55

Aggregate findings will be shared with you, along with information about educational resources that will assist you as you build your technician workforce pipeline and put you in a better position to plan workforce growth.  Please consider collaborating with SCME to support our shared industrial workforce educational improvement goals!

 

Previous MIG Blog:

http://memsblog.wordpress.com/2011/09/16/survey-the-southwest-center-for-microsystems-education/

Building a virtual gyro

Originally posted by Michael E Stanley of Freescale Semiconductor in The Embedded Beat on Mar 12, 2013

In Orientation Representations Part 1 and Part 2, we explore some of the mathematical ways to represent the orientation of an object. Now we’re going to apply that knowledge to build a virtual gyroscope using data from a 3-axis accelerometer and 3-axis magnetometer. Reasons you might want to do this include “cost” and “cost”. Cost #1 is financial. Gyros tend to be more expensive than the other two sensors. Eliminating them from the BOM is attractive for that reason.  Cost #2 is power. The power consumed by a typical accel/mag pair is significantly less than that consumed by a MEMS gyro. The downside of a virtual gyro is that it is sensitive to linear acceleration and uncorrected magnetic interference. If either of those is present, you probably still want a physical gyro.

So how do we go from orientation to angular rates? It’s conceptually easy if you step back and consider the problem from a high level. Angular rate can be defined as change in orientation per unit time. We already know lots of ways to model orientation. Figure out how to take the derivative of the orientation and we’re there!

In our prior postings, we’ve discussed a number of ways to represent orientation. For this discussion, we will use the basic rotation matrix. Jack B. Kuipers has a nice derivation of the derivative of direction cosine matrices in his “Quaternions and Rotation Sequences” text – one of my most used textbooks.  It makes a good starting point.  Paraphrasing his math:

Let:

  1. vf = some vector v measured in a fixed reference frame
  2. vb = same vector measured in a moving body frame
  3. RMt = rotation matrix which takes vf into vb
  4. ω = angular rate through the rotation

Then at any time t:

  5. vb = RMt vf

Differentiate both sides (use the product rule on the RHS):

  6. dvb/dt = (dRMt/dt) vf + RMt (dvf/dt)

Our restriction of no linear acceleration or magnetic interference implies that:

  7. dvf/dt = 0

Then:

  8. dvb/dt = (dRMt/dt) vf

We know that:

  9. vf = RMt^-1 vb

Plugging this into (8) yields:

  10. dvb/dt = (dRMt/dt) RMt^-1 vb

In a previous posting (Accelerometer placement – where and why), we learned about the transport theorem, which describes the rate of change of a vector in a moving frame:

  11. dvf/dt = dvb/dt – ω × vb

Those who take the time to check will note that we have inverted the polarity of the ω in Equation 11 from that shown in the prior posting.  In that case ω was the angular velocity of the body frame in the fixed reference frame.  Here we want it from the opposite perspective (which would match gyro outputs).

And again,

  12. dvf/dt = 0, so
  13. dvb/dt = ω × vb

Equating equations 10 and 13:

  14. ω × vb = (dRMt/dt) RMt^-1 vb
  15. [ω×] = (dRMt/dt) RMt^-1

where:

  16. [ω×] = |  0   –ωz   ωy |
             |  ωz   0   –ωx |
             | –ωy   ωx   0  |
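In code, that cross-product matrix is easy to build. Here is a minimal Python sketch (the helper names are mine, not from any published implementation) confirming that multiplying a vector by [ω×] reproduces the ordinary cross product ω × v:

```python
def skew(w):
    # Skew-symmetric cross-product matrix built from the rate vector
    # w = (wx, wy, wz), matching the matrix form given in the text.
    wx, wy, wz = w
    return [[0.0, -wz,  wy],
            [ wz, 0.0, -wx],
            [-wy,  wx, 0.0]]

def mat_vec(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    # ordinary vector cross product a x b
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# multiplying by skew(w) is the same as crossing with w
w, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert mat_vec(skew(w), v) == cross(w, v)
```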

Going back to the fundamentals in our first calculus course and using a one-sided approximation to the derivative:

  17. dRMt/dt = (1/Δt)(RMt+1 – RMt)

where Δt = the time between orientation samples

  18. [ωb×] = (1/Δt)(RMt+1 – RMt) RMt^-1

Recall that for rotation matrices, the transpose is the same as the inverse:

  19. RMt^T = RMt^-1
  20. [ωb×] = (1/Δt)(RMt+1 – RMt) RMt^T

Equation 20 is a truly elegant result.  It shows that you can calculate angular rates based upon knowledge of only the last two orientations.  That makes perfect intuitive sense, and I’m ashamed when I think how long it took me to arrive at it the first time.

An alternate form that is even more attractive can be had by carrying out the multiplications on the RHS:

  21. [ωb×] = (1/Δt)(RMt+1 RMt^T – RMt RMt^T)
  22. [ωb×] = (1/Δt)(RMt+1 RMt^T – I3×3)

For the sake of being explicit, let’s expand the terms.  A rotation matrix has dimensions 3×3, so both the left- and right-hand sides of Eqn. 22 have dimensions 3×3.

  23. (1/Δt)(RMt+1 RMt^T – I3×3) = (1/Δt) W

  24. W = RMt+1 RMt^T – I3×3 = |  0    W1,2  W1,3 |
                               | W2,1   0    W2,3 |
                               | W3,1  W3,2   0   |

The zero-valued diagonal elements in W result from small-angle approximations: the diagonal terms of RMt+1 RMt^T will be close to one, and are canceled by the subtraction of the identity matrix.  Then:

  25. [ω×] = |  0   –ωz   ωy |  =  (1/Δt) |  0    W1,2  W1,3 |
             |  ωz   0   –ωx |            | W2,1   0    W2,3 |
             | –ωy   ωx   0  |            | W3,1  W3,2   0   |

and we have:

  26. ωx = (1/2Δt)(W3,2 – W2,3)
  27. ωy = (1/2Δt)(W1,3 – W3,1)
  28. ωz = (1/2Δt)(W2,1 – W1,2)

Once we have orientations, we’re in a position to compute corresponding angular rates with

  • One 3×3 matrix multiply operation
  • 3 scalar subtractions
  • 3 scalar multiplications

at each time point.  Sweet!
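To make those operations concrete, here is a minimal Python sketch of the rate-extraction step (my original simulation was in Matlab; the function names below are my own, and the rot_z helper just manufactures test orientations in place of real accel/mag-derived ones):

```python
import math

def mat_mul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def virtual_gyro(rm_next, rm_curr, dt):
    # Form W = RM(t+1) * RM(t)^T - I, then read the angular rates out of
    # the differences of W's off-diagonal terms, halved and scaled by 1/dt.
    w = mat_mul(rm_next, transpose(rm_curr))
    for i in range(3):
        w[i][i] -= 1.0                          # subtract the identity
    return ((w[2][1] - w[1][2]) / (2.0 * dt),   # wx
            (w[0][2] - w[2][0]) / (2.0 * dt),   # wy
            (w[1][0] - w[0][1]) / (2.0 * dt))   # wz

def rot_z(theta):
    # rotation by theta radians about the z axis, for a quick sanity check
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

# spin at 0.5 rad/s about z, sampled every 10 ms
dt = 0.01
wx, wy, wz = virtual_gyro(rot_z(0.5 * dt), rot_z(0.0), dt)
# wz comes out within a fraction of a percent of 0.5; wx and wy are zero
```

Because of the small-angle approximation buried in the one-sided derivative, shorter sampling intervals give better estimates.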

Some time ago, I ran a Matlab simulation to look at outputs of a gyro versus outputs from a “virtual gyro” based upon accelerometer/magnetometer readings.  After adjusting for gyro offset and scale factors, I got pretty good correlation, as can be seen in the figure below.

[Figure: measured gyro outputs vs. virtual gyro outputs from the Matlab simulation]

You will notice that we started with an assumption that we already know how to calculate orientation given accelerometer/magnetometer readings.  There are many ways to do this.  I can think of three off the top of my head:

  • Compute roll, pitch and yaw as described in Freescale AN4248.  Use those values to compute rotation matrices as described in Orientation Representations: Part 1.  This approach uses Euler angles, which I like to stay away from, but you could give it a go.
  • Use the Android getRotationMatrix [4] to compute rotation matrices directly.  This method uses a sequence of cross-products to arrive at the current orientation.
  • Use a solution to Wahba’s problem to compute the optimal rotation for each time point.  This is my personal favorite, but I think I’ll save further explanation for a future posting.
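As a rough illustration of the cross-product construction in the second bullet, here is a Python sketch in the same spirit as Android’s getRotationMatrix (but not its actual code; the function name and the east/north/up frame choice are mine):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def orientation_from_accel_mag(accel, mag):
    # With no linear acceleration, the accelerometer reading points "up".
    # Crossing the magnetometer reading with "up" gives "east", and a
    # second cross product completes a right-handed east/north/up triad.
    up = normalize(accel)
    east = normalize(cross(mag, up))
    north = cross(up, east)
    return [east, north, up]    # the rows form a rotation matrix

# device lying flat with its x axis pointing east: identity orientation
r = orientation_from_accel_mag([0.0, 0.0, 9.8], [0.0, 30.0, -40.0])
```

Note that this construction fails exactly where the theory says it should: under linear acceleration or magnetic interference, “up” and “east” are no longer what the sensors report.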

Whichever technique you use to compute orientations, you need to pay attention to a few details:

  • Remember that non-zero linear acceleration and/or uncorrected magnetic interference violate the physical assumptions behind the theory.
  • The expressions shown generally rely on a small angle assumption.  That is, the change in orientation from one time step to the next is relatively small.  You can encourage this by using a short sampling interval.  You should soon see an app note that my colleague Mark Pedley is working on that discards that assumption and deals with large angles directly.   I like the form I’ve shown here because it is more intuitive.
  • Noise in the accelerometer and magnetometer outputs will result in very visible noise in the virtual gyro output.  You will want to low pass filter your outputs prior to using them.  Mark will be providing an example implementation in his app note.
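On that last point, even a simple single-pole IIR filter per axis goes a long way. This is a generic sketch with a hypothetical smoothing constant, not the filter from Mark’s forthcoming app note:

```python
class LowPass:
    # Single-pole IIR low-pass filter; run one instance per virtual gyro
    # axis. alpha in (0, 1]: smaller alpha = heavier smoothing, more lag.
    def __init__(self, alpha, initial=0.0):
        self.alpha = alpha
        self.state = initial

    def update(self, sample):
        # move the state a fraction alpha of the way toward the new sample
        self.state += self.alpha * (sample - self.state)
        return self.state

# a constant input settles toward its true value
f = LowPass(alpha=0.2)
for _ in range(60):
    out = f.update(1.0)
```

The usual trade-off applies: heavier smoothing suppresses more of the accel/mag noise but adds lag to the recovered rates.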

This is one of my favorite fusion problems.  There’s a certain beauty in the way that nature provides different perspectives of angular motion.  I hope you enjoy it also.

References

  1. Freescale Application Note Number AN4248: Implementing a Tilt-Compensated eCompass using Accelerometer and Magnetometer Sensors
  2. Orientation Representations: Part 1 blog posting on the Embedded Beat
  3. Orientation Representations: Part 2 blog posting on the Embedded Beat
  4. getRotationMatrix() function defined at http://developer.android.com/reference/android/hardware/SensorManager.html
  5. Wikipedia entry for “Wahba’s problem”
  6. U.S. Patent Application 13/748381, SYSTEMS AND METHOD FOR GYROSCOPE CALIBRATION, Michael Stanley, Freescale Semiconductor

MIG visits Tohoku University, Sendai, Japan

Contributed by Karen Lightman, Managing Director, MEMS Industry Group

My journey through Japan continued with a trip up to Sendai (which is 96 minutes north of Tokyo by Shinkansen), at the invitation of Professor Esashi-sensei of Tohoku University. Takeo Oita-san of NDK accompanied me on my visit to Sendai.  We were greeted at the station by Katou Hiroyuki-san and Ms. Emi Ooba, both with the Commercialization Support Sub-section, Industrial-Academic Collaboration Promotion Section, Economic Affairs Bureau, Sendai City.  Their focus is to promote Sendai as the “best location” for R&D. I was humbled and impressed by the hospitality and graciousness shown by them and their director, Hiroyuki Miyata. Continue reading

The Zen of Sensor Design

Contributed by Mike Stanley

Originally posted on Freescale’s Smart Mobile Devices Embedded Beat Blog

About two years ago, I joined the Freescale sensors team, which focuses on accelerometers, pressure sensors, and touch sensors.

Prior to that, I spent a number of years in Freescale’s microcontroller solutions group, where I was an architect for several digital signal controller and microcontroller product families. One of the first things I learned when I moved into the sensors group was that certain “rules of the game” that apply to microcontroller design need to be adapted when dealing with sensors. An example is package selection. With most microcontrollers, package selection is based upon the number of functional and power pins required, the PCB assembly processes targeted and (sometimes) thermal characteristics. Performance considerations are often secondary, if they exist at all. Sensors, by contrast, interact with the real world. Mechanical stresses introduced during both package assembly and PCB mounting can affect the electrical performance of the device, often showing up as additional offset or variation of performance with temperature. Even the compound used for die attach has a demonstrable effect on sensor performance, and must be considered early in the design process. Continue reading

Evolving Intelligence with Sensors

Contributed by Michael Stanley, Freescale Semiconductor

Originally posted on Freescale’s Smart Mobile Devices Embedded Beat blog

I’ve always been fascinated by electronic sensors. The idea of being able to measure and interact with the physical world appeals to the ten-year-old inside me. Not so long ago, if you needed to measure some physical quantity as an input to your system, you bought an analog sensor, hooked up your own signal conditioning circuitry, and fed the result into a dedicated analog-to-digital converter. Over time, engineers demanded, and got, self-contained products which handled those signal conditioning and conversion tasks for them. Continue reading

What in the World is Contextual Sensing?

Contributed by Michael Stanley, Freescale Semiconductor

Originally posted on Freescale’s Smart Mobile Devices Embedded Beat blog

You can’t use your phone, drive your car or even nuke a sandwich without relying on one or more electronic sensors to help you complete the task.  Their use has become ubiquitous, and most people are blissfully unaware of just how much they depend on them in their daily lives. Continue reading