Contributed by Michael Stanley, Freescale Semiconductor
Originally posted on Freescale’s Smart Mobile Devices Embedded Beat blog
I’ve always been fascinated by electronic sensors. The idea of being able to measure and interact with the physical world appeals to the ten-year-old inside me. Not so long ago, if you needed to measure some physical quantity as an input to your system, you bought an analog sensor, hooked up your own signal conditioning circuitry, and fed the result into a dedicated analog-to-digital converter. Over time, engineers demanded, and got, self-contained products which handled those signal conditioning and conversion tasks for them.
Numeric values were provided via a digital communications port (often SPI or I2C) to the system controller, which acted on that data. More modern sensors added logic to offload some of that processing from the system controller. One well-known example of this is the portrait/landscape feature now common in cell phones and devices like Apple's iPad. At the heart of that decision is a MEMS accelerometer which measures acceleration (and gravity) in the X, Y and Z dimensions, and figures out which way is "up." Simple conceptually. Not so simple under the hood.
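To make the idea concrete, here is a minimal sketch of how a portrait/landscape decision can be made from a 3-axis gravity reading. The orientation names, thresholds, and function are illustrative assumptions for this post, not the actual logic inside any shipping sensor:

```c
#include <math.h>

/* Hypothetical orientation codes -- illustrative only, not a real
 * sensor's register values. */
typedef enum {
    PORTRAIT_UP,
    PORTRAIT_DOWN,
    LANDSCAPE_LEFT,
    LANDSCAPE_RIGHT,
    ORIENTATION_FLAT
} orientation_t;

/* Decide device orientation from a 3-axis gravity reading (in g).
 * If the device lies nearly flat, gravity sits mostly on Z and the
 * X/Y components are too weak to trust; otherwise the larger of the
 * X and Y magnitudes picks portrait vs. landscape, and its sign
 * picks which way is "up." */
orientation_t classify_orientation(double x, double y, double z)
{
    double ax = fabs(x), ay = fabs(y);
    (void)z;  /* Z only matters implicitly: flat means weak X and Y. */

    /* Flat on a table: no meaningful portrait/landscape answer. */
    if (ax < 0.3 && ay < 0.3)
        return ORIENTATION_FLAT;

    if (ay >= ax)
        return (y >= 0.0) ? PORTRAIT_UP : PORTRAIT_DOWN;
    else
        return (x >= 0.0) ? LANDSCAPE_RIGHT : LANDSCAPE_LEFT;
}
```

A production implementation would add hysteresis and debouncing so the display doesn't flicker between orientations when the device is held near a 45-degree angle, which is part of what "not so simple under the hood" means.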
The next logical step in this evolution of distributed intelligence is now well underway. A prime example is the advanced MMA9550L Intelligent 3-Axis Accelerometer being introduced by Freescale under its new Xtrinsic trademark. No longer are you restricted to the feature set offered by your sensor. The MMA9550L is programmable! A sensitive 3-axis MEMS transducer has been bundled with a 32-bit ColdFire V1 microcontroller unit (MCU) and on-chip flash memory.
Is it a sensor, or is it an MCU? Actually, it's both and more. The MMA9550L is fully programmable using a standard BDM interface and CodeWarrior development tools. It comes with half of its flash pre-programmed with a real-time scheduler and a variety of filter and gesture recognition functions. Take note of that last phrase: "gesture recognition." We have now arrived at the point where our sensors are "contextually aware." The sensor can be programmed to wake up your phone when you lift it to your ear. Or it could instruct the display driver to zoom in when you move the unit towards you while reading the display. What's really exciting is that these decisions can be made without involving the host central processing unit (CPU). That means you can leave your host CPU in its lowest power mode until it is absolutely needed. You save power and extend battery life while adding leading-edge features to your product.
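The lift-to-ear example can be sketched as a tiny state machine running on the sensor's own MCU, watching gravity samples for a transition from a face-up pose to a vertical pose, and only then asserting an interrupt to wake the host. Everything here, the struct, thresholds, and function names, is a hypothetical illustration, not the MMA9550L firmware's actual interface:

```c
#include <math.h>

/* Hypothetical lift-to-ear gesture detector. The sensor-side MCU can
 * run something like this continuously while the host CPU sleeps. */
typedef struct {
    int was_face_up;  /* have we recently seen the face-up pose? */
} lift_detector_t;

void lift_detector_init(lift_detector_t *d)
{
    d->was_face_up = 0;
}

/* Feed one gravity sample (in g). Returns 1 exactly once when the
 * device goes from lying roughly face up to being held roughly
 * vertical -- the moment the host would be woken. */
int lift_detector_step(lift_detector_t *d, double x, double y, double z)
{
    (void)x;

    if (z > 0.8 && fabs(y) < 0.3) {   /* lying roughly face up */
        d->was_face_up = 1;
        return 0;
    }
    if (d->was_face_up && fabs(y) > 0.8 && fabs(z) < 0.3) {
        d->was_face_up = 0;           /* now roughly vertical: fire once */
        return 1;
    }
    return 0;
}
```

Because this loop never leaves the sensor, the host CPU stays in its deepest sleep state until the single sample where the gesture actually completes.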
Once you integrate MEMS and CPU technologies, a whole new world opens up. Next-generation devices will incorporate multiple MEMS devices, and the local CPU will take care of most of the housekeeping and decision making for you. In iSuppli's May 24 Market Watch newsletter, iSuppli tells us to expect "Colossal Growth in 2010 and Beyond" in the multi-sensor MEMS industry. They expect sales of multi-sensor devices to hit 305 million units by 2014, and predict that 30% of motion sensors shipped then will be combined with other devices in a single package.
The MMA9550L's MEMS transducer, 32-bit ColdFire V1 CPU, 16K flash memory, 4K ROM and 2K RAM all fit into a 3 x 3 x 1 mm 16-pin LGA package. Ten of those 16 pins double as general-purpose I/O pins. So while I've presented the device as an obedient slave to a host CPU, it is perfectly capable of acting as the sole CPU in simple systems. It also has a separate master/slave I2C port for controlling other devices.
The MMA9550L has been awarded the Freescale Energy-Efficient Solutions mark for its internal power savings features, as well as the power savings at the systems level that it enables. I’m looking forward to exploring details of this little device in future postings. In the meantime, you can get more information on Freescale’s sensing solutions.