Are Hardware Hubs Coming?

Submitted by: Bryon Moyer, Editor of EE Journal

Sensor fusion has been all the rage over the last year. We’ve all watched as numerous companies – both makers of sensors and the “sensor-agnostic” folks – have sported dueling algorithms. Sensor fusion has broadened into “data fusion,” where other non-sensor data like maps can play a part. This drama increasingly unfolds on microcontrollers serving as “sensor hubs.”

But there’s something new stirring. While everyone has been focusing on the algorithms and on which microcontrollers are fastest or consume the least power, the suggestion is being put forward that the best way to execute sensor fusion may not be in software at all: it may be in hardware.

Software and hardware couldn’t be more different. Software is highly flexible, runs anywhere (assuming compilers and such), and executes serially. (So far, no one that I’m aware of has proposed going to multicore sensor fusion for better performance.) Hardware is inflexible, may or may not depend on the underlying platform, and can run blazingly fast because of massive inherent parallelism.

Of course, then there’s the programmable version of hardware, the FPGA. These are traditionally large and power-hungry – not fit for phones. A couple companies – QuickLogic and Lattice – have, however, been targeting phones with small, ultra-low-power devices and now have their eyes on sensor hubs. Lattice markets their solution as a straight-up FPGA; QuickLogic’s device is based on FPGA technology, but they bury that fact so that it looks like a custom part.

Which solution is best is by no means a simple question. Hardware can provide much lower power – unless sensor hub power is swamped by something else, in which case it theoretically doesn’t matter. (Although I’ve heard few folks utter “power” and “doesn’t matter” in the same breath.) Non-programmable hardware is great for standard things that are well-known; software is good for algorithms in flux. Much of sensor fusion is in flux, although it does involve some elements that are well-understood.

Which suggests that this might not just be a hardware-vs-software question: perhaps some portions remain in software while others get hardened. But do you end up with too many chips then? A sensor hub is supposed to keep calculations away from the AP. If done as hardware, that hub can be an FPGA (I can’t imagine an all-fixed-hardware hub in this stage of the game); if done in software, the hub can be a microcontroller. But if it’s a little of both hardware and software, do you need both the FPGA and the microcontroller?

Then there’s the issue of language. High-level algorithms start out abstract and get refined into runnable software in languages like C. Hardware, on the other hand, relies on languages like VHDL and Verilog – very different from software languages. Design methodologies are completely different as well. Converting software to optimal hardware automatically has long been a holy grail and remains out of reach. Making that conversion is easier than it used to be, and tools to help do exist, but it still requires a hardware guy to do the work. The dream of software guys creating hardware remains a dream.

There’s one even more insidious challenge implicit in this discussion: the fact that hardware and software guys all too often never connect. They live in different silos. They do their work during different portions of the overall system design phase. And hardware is expected to be rock solid; we’re more tolerant (unfortunately) of flaws in our software – simply because they’re “easy” to fix. So last-minute changes in hardware involve far whiter knuckles than do such out-the-door fixes in software.

This drama is all just starting to play out, and the outcome is far from clear. Will hardware show up and get voted right off the island? Or will it be incorporated into standard implementations? Will it depend on the application or who’s in charge? Who will the winners and losers be?

Gather the family around and bring some popcorn. I think it’s going to be a show worth watching.

Guest Blog – MEMS New Product Development, Importance of Product Validation (Part 5)

Written by: David DiPaola, DiPaola Consulting, LLC

Product validation is an essential part of every successful MEMS new product development. It is the process of testing products under various environmental, mechanical or electrical conditions to simulate life in an accelerated manner. Testing early and often needs to be a daily routine, not just a popular phrase used in meetings. This blog will cover proven methods to perform MEMS product validation accurately while mitigating potential issues that result in repeated tests and inaccurate results.

Measurement system analysis, or MSA, is a methodology to qualify the measurement system that will be used to characterize the product. In the context of MEMS, this could be a function test system for characterizing the performance of a MEMS pressure sensor by applying known pressures and temperatures and measuring the sensor output. The first step of MSA is to calculate total system accuracy, determined by a tolerance stack of subcomponent errors traceable to NIST reference standards. This will ensure your test system has the accuracy needed to properly characterize the samples. In addition, linearity between the true and measured values, with minimal bias change, and stability of the measurement system over time should be demonstrated. Lastly, a Gage R&R (using the average-and-range or ANOVA method), expressed in percent of process variation (not tolerance), should be completed to demonstrate repeatability and reproducibility for each test system utilized. An excellent reference for MSA is the manual Measurement System Analysis.
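As a concrete illustration of the average-and-range Gage R&R mentioned above, the sketch below computes a %GRR-of-process-variation figure from operator/part/trial readings. This is a minimal, hypothetical example — the function name and the synthetic data are mine, not from the original post — and the K1/K2/K3 constants used are the standard average-and-range table values for 3 trials, 2 operators and 5 parts.

```python
def gage_rr(data, k1, k2, k3):
    """Average-and-range Gage R&R. data[operator][part] = list of trial readings.
    Returns %GRR relative to total (process) variation."""
    n_ops = len(data)
    n_parts = len(data[0])
    n_trials = len(data[0][0])

    # Repeatability (equipment variation): average within-cell range
    ranges = [max(cell) - min(cell) for op in data for cell in op]
    r_bar = sum(ranges) / len(ranges)
    ev = r_bar * k1

    # Reproducibility (appraiser variation): spread of operator averages,
    # corrected for the repeatability already counted in EV
    op_means = [sum(sum(cell) for cell in op) / (n_parts * n_trials) for op in data]
    x_diff = max(op_means) - min(op_means)
    av_sq = (x_diff * k2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = max(av_sq, 0.0) ** 0.5

    grr = (ev ** 2 + av ** 2) ** 0.5

    # Part-to-part variation: range of per-part averages
    part_means = [sum(sum(data[op][p]) for op in range(n_ops)) / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * k3

    tv = (grr ** 2 + pv ** 2) ** 0.5
    return 100.0 * grr / tv

# Synthetic readings: two operators, five parts, three trials each
data = [
    [[m + 0.1, m, m - 0.1] for m in (10, 12, 14, 16, 18)],  # operator A
    [[m + 0.2, m + 0.1, m] for m in (10, 12, 14, 16, 18)],  # operator B
]
pct_grr = gage_rr(data, k1=0.5908, k2=0.7071, k3=0.4030)
print(f"%GRR of process variation: {pct_grr:.1f}%")
```

A common acceptance guideline is %GRR under 10% acceptable, 10–30% marginal and over 30% unacceptable; the low-noise synthetic data above lands comfortably in the acceptable band.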

Verification of the test system setup and function of the equipment is an important step prior to the start of validation. Oftentimes, improper test setup or malfunctioning equipment results in repeated tests and delayed production launches. This is easily avoidable by documenting proper system setup and reviewing the setup thoroughly (every parameter) prior to the start of the test. Equally important, the engineer should verify the system outputs are on target using calibrated tools, after the tools themselves are verified using a known good reference.

We all like to believe that customer specifications are well thought out and based on extensive field and laboratory data. Unfortunately, this is not always the case. Hence it is prudent for engineers to challenge areas of the customer’s specification that do not appear robust. Neither the customer nor the supplier wins if the product meets the defined specification but fails in the field. The pain of such events is pervasive and extremely costly for all parties. As parts complete laboratory tests, take the added step of comparing the results to similar products in the field at end of life, and ensure they show similar degraded appearance. Whenever possible, test products to failure in the laboratory setting to learn as much as possible about failure mechanisms. When testing to failure is not possible, perform the validation to 3 – 5X the customer specification to ensure proper margin exists, mitigating the risk of field failures. Furthermore, always take advantage of field tests, even if limited in duration. They can provide valuable information missed in a laboratory validation.

As briefly stated earlier, a function test or product characterization is the process of applying known inputs such as pressure, force, temperature, humidity, acceleration, rotation, etc. (sometimes two or more simultaneously), measuring the output of the MEMS product and comparing it to the desired target. This is completed to ensure the product is compliant with the stated performance specification from the manufacturer. As product life is accelerated through the validation, the device function should be characterized multiple times during the test to understand product drift and the approximate time of failure. It is recommended to perform function tests 3 – 8 times at periodic (equally spaced or skewed) intervals during the validation, after the initial pretest characterization. As an example, I often test products at 0, 25, 50, 75 and 100% of the validation.
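The checkpoint scheme above — a pretest characterization plus periodic re-characterizations — lends itself to a simple drift log. The sketch below is a hypothetical helper, not from the original post: it compares each checkpoint reading (at a fixed known input) against the pretest value and flags any drift beyond an assumed allowable limit.

```python
def drift_report(checkpoints, readings, limit):
    """checkpoints: percent-complete points (e.g. 0, 25, 50, 75, 100).
    readings: device output at a fixed known input at each checkpoint.
    Returns (checkpoint, drift-from-pretest, within-limit) tuples."""
    pretest = readings[0]  # the 0% characterization is the baseline
    return [(pct, r - pretest, abs(r - pretest) <= limit)
            for pct, r in zip(checkpoints, readings)]

# Illustrative numbers: output in mV at a fixed applied pressure,
# with an assumed +/-1.0 mV allowable drift
report = drift_report([0, 25, 50, 75, 100],
                      [100.0, 100.2, 100.5, 100.9, 101.4],
                      limit=1.0)
for pct, drift, ok in report:
    print(f"{pct:3d}%: drift {drift:+.1f} mV  {'OK' if ok else 'OUT OF LIMIT'}")
```

Plotting drift against checkpoint percentage also gives an early read on whether degradation is linear or accelerating, which helps approximate time of failure.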

Use of test standards is highly encouraged, as it brings both consistency and credibility to the validations performed. Several organizations develop test standards for general use, such as ASTM, JEDEC, AEC, the military and more. When a product is tested to standards widely accepted in the industry, the intended audience is more likely to accept the results than if an unfamiliar, possibly less stringent test method was applied. Some commonly used standards include ASTM B117 (salt spray), JEDEC JESD22-A106B (thermal shock), Automotive Electronics Council AEC-Q100 (stress test for integrated circuits) and MIL-STD-883 (various environmental tests), just to mention a few. A list of validation standards used across the MEMS industry can be found in the MEMS Industry Group Member Resource Library, Standards Currently in Use at MEMS Companies.

In the validation of MEMS products, it is tempting to perform the testing on units from one wafer that has yielded 1000 pieces. However, this is a single window in time and does not properly reflect the true process variation that can occur. A better sampling approach for validation is taking units from multiple wafers within a lot and across multiple wafer lots. Equally important, differing raw material lots should be used (one example is the starting SOI wafers). This will ensure supplier, equipment, process, operator and time-sensitive factors are well understood.
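The sampling plan described above — units drawn from every wafer within and across lots — amounts to a stratified draw. The sketch below is an illustrative assumption of mine (inventory layout, function name and fixed seed included), not a prescribed procedure.

```python
import random

def stratified_sample(inventory, per_wafer, seed=0):
    """inventory: {(lot_id, wafer_id): [die ids]}.
    Draws per_wafer units from every wafer so that each lot/wafer
    combination is represented in the validation build."""
    rng = random.Random(seed)  # fixed seed keeps the build traceable
    return {key: rng.sample(dies, per_wafer) for key, dies in inventory.items()}

# Two lots, two wafers each, ten dies per wafer
inventory = {(lot, wafer): list(range(10))
             for lot in ("L1", "L2") for wafer in (1, 2)}
picks = stratified_sample(inventory, per_wafer=2)
for key, dies in sorted(picks.items()):
    print(key, dies)
```

Recording the (lot, wafer, die) identity of every validation unit this way also makes it straightforward to trace a later field or lab failure back to its process window.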

Controls are another method to learn valuable information about the products being validated and the equipment being used. A basic control could be as simple as a product that is function tested at each stage of the test but does not go through any of the validation (i.e. sits on a shelf at room temperature). This will indicate that something has gone wrong with your test system if the same errors are seen in both the experimental group (parts going through validation) and the control group. Another use of a control is testing a product that has previously passed a given validation (control group) while simultaneously testing a product that has undergone a change or is entirely new (experimental group). This will provide information on whether the change had any impact on device performance, or whether the new device is as capable as the previous generation.

Lastly, validation checklists are a valuable tool to ensure each test is set up properly before the test begins. Without a checklist, it is easy to overlook a step in the pursuit of starting the test on time to meet a customer’s schedule. Below is a sample validation checklist for thermal shock. This can be modified for other tests as well.

Thermal Shock Validation Checklist

  • Perform proper preventative maintenance on the environmental chambers before the start of the test to prevent malfunction during the test
  • Identify appropriate control and experimental groups and ensure proper sampling from multiple wafers and lots; document sample sizes

  • Identify a proper validation standard or customer specification to define the test
  • Document pass / fail criteria for the devices under test
  • Create a test log and record any time an event occurs (i.e. start of test, end of test, devices removed from thermal chamber for testing, etc.)
  • Verify calibration of measurement reference and trace it back to a national standard
  • Verify the measurement reference with an appropriate simple test (i.e. a thermocouple’s accuracy and repeatability with boiling water, room temperature, ice water and other known sources)
  • Measure the temperature of the hot and cold chambers with an accurate and verified reference prior to the start of the test (i.e. thermocouple ± 1°C); verify chamber temperature is consistent across the part loading

  • Verify the time it takes the thermal load to reach the desired temperature (i.e. -40°C) and that it is within test guidelines
  • Measure the transition time between hot and cold chambers and verify it is within test guidelines
  • Complete all necessary MSA on test equipment and document the results
  • Engrave serial number on each device (paint pen can be easily removed)
  • Document the location of devices in environmental chamber with digital photograph
  • Record serial number and manufacturer for environmental chambers used
  • Determine and document periodic intervals for device function test
  • Continuously monitor environmental chamber temperature for the duration of the test using an appropriate chart recorder; document the location of the thermocouple (photo) and verify it is located close to the parts

  • Monitor device output continuously during the test
  • Check on the environmental chamber daily to ensure no malfunctions have occurred and monitor daily cycle count
  • Create a test-in-process sign with appropriate contact information for support staff; this will likely prevent individuals from accidentally turning off the environmental chamber or changing temperature profiles without notifying you

  • Document any changes to this specification for future reference

Product validation is a critical tool for learning about MEMS performance over a laboratory-based accelerated life. It’s an excellent method to validate theory and ensure product robustness in the field. The due diligence presented in this blog will help engineers avoid seemingly small mistakes that cause repeated tests, inaccurate results and missed customer deadlines.


Bio: David DiPaola is Managing Director of DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products. A 17-year veteran of the field, he has brought many products from concept to high-volume production with outstanding quality. His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics. He employs a problem-solving-based approach, working side by side with customers from start-ups to multi-billion-dollar companies. David also serves as Senior Technical Staff to The Richard Desich SMART Commercialization Center for Microsystems, is an authorized external researcher at The Center for Nanoscale Science and Technology at NIST and is a Senior Member of IEEE. Previously he has held engineering management and technical staff positions at Texas Instruments and Sensata Technologies, authored numerous technical papers, is a respected lecturer and holds 5 patents. To learn more, please visit

Guest Blog: RoboBusiness Conference Review

Originally posted by Michael E Stanley in The Embedded Beat on Nov 1, 2013

Last week I had the opportunity to present a tutorial on sensor fusion at the RoboBusiness Conference in Santa Clara.  It was my first time at RoboBusiness, and I thoroughly enjoyed attending sessions, wandering the show floor and talking with other attendees.  As you might guess, this conference is all about BUSINESS enabled by robotics.  Neat technology by itself is of only academic interest.  The attendees are interested in making MONEY.  The theme of this year’s show was “Invest, Innovate, Implement”.

One of my first destinations was to visit Baxter, developed by Rethink Robotics. You might have read about Baxter in the IEEE Spectrum article “How Rethink Robotics Built Its New Baxter Robot Worker“. Baxter represents a new breed of robot intended to work alongside people. If a human gets in Baxter’s way, he yields instead of knocking the dumb human on his or her keister.

It’s also easy to tell Baxter what you want him to do. Simply grip both sides of his “wrist” and Baxter’s arm enters a “weightless” state. You move his arm where you want it to go and program waypoints via a number of buttons on his arm. No computer needed. I experimented with the weightless mode, and it really is easy to move him about.

[Image: Mike meets Baxter]

Freescale had a booth on the show floor. Immediately across from us was Velodyne, the folks who make the Lidar systems you’ll see on top of the Google mapping cars that roam the nation. Velodyne had one of their systems set up on the show floor, generating a real-time image of the surrounding area (see image below). The system is responsive enough that you can clearly see people walking around the floor. Similar systems will be used in high-end robots for mapping and collision avoidance.

[Image: Velodyne Lidar display]

I’ve seen telepresence robots at the last couple Consumer Electronics Shows that I’ve attended. As you might expect, they were out in force at this event, courtesy of Robotics Trends and Suitable Technologies in the form of Beam robots (below). At one point in the show, I found myself in a conversation with two other individuals, both attending via Beam – one from India, the other from the U.S. east coast. There were no noticeable bandwidth or delay issues. And my remote friends informed me that the Beams have multiple cameras, including one that enabled them to see my feet – so they would not bump into things. With a top speed of over 3 MPH, that’s a good thing!

[Image: Beam telepresence robots at their charging stations]

One of the keynote sessions was entitled “Deploying 20 Autonomous Mobile Robots in a Hospital” by Aldo Zini of Aethon and Ken King of El Camino Hospital.  They mentioned that nurses spend, on average, 50% of their time on logistical and administrative tasks.  A lot of this time is spent dealing with the hospital supply chain: medication, supplies, food, linen and trash.  Delivery of these is complicated by the large size of the average hospital campus, variation in delivery size (a few pills versus a stack of clean linen) and the need for timely delivery.  The Aethon tug (shown below) can be customized for different tasks.  Some of the machines at El Camino deliver drugs, others handle linen, etc.  Because the machines are able to work 24/7/365, the hospital found that they needed less floor space devoted to “staging areas”, and could re-purpose that space for revenue generating purposes.

[Image: The Aethon Tug]

Field robotics is another huge area of growth.  This might include field preparation, spraying for pests, herding, harvest automation and more. Autonomous Solutions, Inc. is one of the companies working in this space.  They offer kits to retrofit vehicles for remote use.

[Image: ASI robotic vehicle]

Another interesting keynote was entitled “Big Data Meets Big Agriculture: UAV Solutions for Modern Farming” by Brandon Basso of 3D Robotics. Brandon presented example data sets collected via UAVs that clearly identify variation in plant health across a farm’s growing area. Because the UAVs are self-guided, a farmer can launch the vehicle and then go for coffee while data is collected. The farmer can then make intelligent decisions with regard to where to fertilize, add or decrease water, etc. Decreases in UAV costs, as well as the ease of data analysis (which can now be done in an hour or two), mean that farmers now have the option to perform this type of analysis multiple times over the course of a single growing season.

I got a chance to play with a really interesting haptics demonstrator by Barrett Technology Inc. You grasp a ball at the end of a robot arm mounted on a tripod (below left). Moving that ball in space causes a virtual ball on a computer screen (below right) to move about an enclosed “room”. Each surface (4 walls, ceiling and floor) of the room is modeled as being composed of a different material. For instance, the floor was grooved and the right wall was magnetic. When you “rolled” the ball across those surfaces, you could feel their interaction with the ball you held in your hand.

[Image: Barrett Technology haptics demonstrator]

Robotic arms have been a staple of the industry for generations, and there was no shortage at RoboBusiness.  c-link Systems (not shown) actually shared space in Freescale’s booth.  Others shown below (from left to right) include ABB Robotics, Schunk, and Universal Robots.  ABB has a 43 page catalog of robot solutions, ranging from controllers, track systems, positioners, point robots and more.  Another company to look at in this space (again, not shown), is Synapticon.

[Image: Robot arm from ABB Robotics]

[Image: Robot arm from Schunk]

[Image: Robot arm from Universal Robots]

The roving robot shown below was developed by Unbounded Robotics. That company was the winner of the PITCHFIRE event, where startup firms pitched their companies and products for the venture capital community.  Unbounded Robotics has a nice video of the UBR-1 robot in action, which you should definitely view.

[Image: Unbounded Robotics UBR-1]

The VEX Robotics Design System (below) brings the old Erector set into the 21st century, offering everything from complete robot kits to a-la-carte ordering of individual components. Their products are tailored for STEM education, but will make even experienced engineers drool in anticipation.

[Image: VEX robot kits]

As I mentioned earlier, this was my first RoboBusiness conference.  As a novice in the field, I came away with a number of lessons learned:

  • Robots have gone mainstream, affecting many more areas of the economy than just manufacturing.
  • The community seems to think that they’ve solved the navigation problem.  Autonomous UAVs can now navigate a pre-defined flight plan, avoiding unplanned obstacles on their own initiative, and even selecting their own landing sites.  Ground-based TUG robots can roam buildings on their own, with no major infrastructure to enable that navigation.  And SLAM (Simultaneous Localization and Mapping) techniques have progressed to the point where one presenter was able to show a full three dimensional model of the Tower of Pisa that was generated in 20 minutes.
  • The Robot Operating System (ROS) is the dominant toolset used by the industry today, although other options are still in use.  ROS is also consistent with the concept of Cloud Robotics.
  • iRobot and ABB were two of the big players at the show.  Most of the other companies I saw were much smaller.  There is still a lot of innovation and entrepreneurship going on.
  • The industry DOES have an up-to-date and detailed roadmap: A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 Edition
  • 50% of the pilots in training today will be drone pilots
  • The adoption of co-worker robots is leading to re-shoring of jobs.  That is, bringing jobs BACK to the U.S.
  • Some industries (particularly fast food) may face job losses as robotic technologies are adopted.
  • In 20 years, coast-to-coast air freight shipping will be unmanned in the U.S.
  • Take a few minutes to visit the website of the Robotics Virtual Organization
  • If you are interested in personal UAVs, visit
  • Primesense and the Microsoft Kinect have revolutionized robot vision, navigation and mapping by dramatically lowering the cost of vision hardware.
  • ROS Industrial is extending ROS into industrial settings.

I arrived on site in Santa Clara around mid-day on Wednesday, the 23rd of October, and headed home about 4 PM that Friday. Hopefully the information above gives you a rough idea of just how jam-packed the conference and show were with information. A good time was had by all!

Guest Blog: Design Enablement and the Emergence of the Near Platform

Guest blog post written by: Peter Himes, Silex Microsystems
Introduced by: Karen Lightman, MEMS Industry Group

I am pleased to bring you this blog by Silex Microsystems’ Peter Himes, vice president of marketing & strategic alliances. Peter reflects on MEMS, and while others might lament the conundrum that every MEMS process is unique (you can hum it to the tune initially coined by Jean-Christophe Eloy: “one product, one process”), Peter instead sees opportunity. Through this challenge, Peter sees opportunity for innovation and collaboration. And what pleases me the most about his musings on MEMS is that his basic thesis is my mantra: “to succeed in MEMS, you can’t go at it alone – you must partner.” In this example he describes Silex’s partnership with A.M. Fitzgerald and Associates and their Rocket MEMS program. Read on, plug in and share your thoughts on how you’ve creatively sparked innovation in your own company, especially if you come to the same conclusion: in MEMS, it takes a village; you can’t go at it alone.

Design Enablement and the Emergence of the Near Platform

What does it mean to enable a MEMS design? Is it enough to have silicon wafers, a clean room and some tools? What bridges the idea to product?

Traditionally it has meant a series of trials, based on past experience, of conceiving a process flow that results in the final desired structure. What steps are possible? What materials can be used? How will the device react to the process, and how will it perform after all processing is done? All of these questions need to be understood simultaneously. Being able to do this consistently over many different projects is how Silex helps the most innovative MEMS companies get their ideas to high-volume manufacturing.

But in markets where MEMS is becoming mainstream, where acceptance of MEMS technologies is encouraging traditional and non-traditional customers alike to consider their own MEMS programs, is this enough to enable the rapid growth of MEMS going forward? Is every MEMS device trapped in a paradigm of custom process development and new materials development? Does everything require MEMS PhD expertise to engineer a perfect solution? In a market where customers are looking for customized MEMS devices AND rapid time to market, can they have both?

The core of MEMS still lies in the custom process integration and the universe of MEMS devices is still expanding, pushed by the dark energy of innovation. Our SmartBlock™ approach to process integration is why we can execute on these challenges in a consistent and high quality way. But it still takes the time and effort of customized processes to achieve full production qualification, so we also believe that another model is possible, and we are beginning to see it emerge.

Process integration into a foundry environment is something we also call Design Enablement, because a successful MEMS process enables designs to be turned into an actual product. But the power of design enablement is somewhat muted if the echo only rings once. The true power of Design Enablement is when the process can resonate over many products or many redesigns of the same product. This would break the “one product, one process” paradigm and is what we believe is the next phase in the MEMS industry.

Rocket MEMS

Alissa Fitzgerald of AMFitzgerald & Associates had a dilemma and an idea. To her, the normal route for MEMS development was difficult from the start: begin with an idea and use a university or research lab to get a prototype out. Once it is successful, contact a production MEMS foundry to manufacture it – only to find out that there are still months or years of process qualification ahead. What if she could collaborate with a foundry from the start and define a product design platform and a process flow simultaneously? Using the known process capabilities of an existing foundry, build and characterize the product to that process, so that the processing window and the product spec window are defined simultaneously. Then you have a process platform that is solid, “de-risked,” and ready to take customers to market quickly.

This is the idea behind the AMFitzgerald RocketMEMS program and Silex’s support and partnership in the initiative. And it results in something which is not fully customized for each new product, yet is not completely and rigidly fixed either. Rather, it is a “Near Product Platform” made possible by the design enablement of the Silex process integration approach and AMFitzgerald’s product design framework and methodology. It allows for product specific variability without breaking the mold out of which the process was cast.

And it works.