[AR] Re: FOGgy idea \ ... vanguarduino?
- From: Dave McMillan <skyefire@xxxxxxxxxxxx>
- To: arocket@xxxxxxxxxxxxx
- Date: Wed, 20 Jan 2016 09:01:00 -0500
On 1/19/2016 12:47 PM, Henry Spencer wrote:
On Mon, 18 Jan 2016, David Gregory wrote:
Which brings me to a question I've often wondered about: why aren't MEMS
accelerometer manufacturers leveraging the magic of modern
semiconductor manufacturing by putting thousands of sensors on a single
die and averaging them on-die to achieve better accuracy?
Well, averaging noisy sensors only works if the error distribution
is essentially random, as I understand it. At the chip-fab level, I
would expect there is a non-zero chance that a bunch of sensors made on
the same die might well all share the same "skew", thereby "tilting"
their averaged result. So it might still be beneficial to deliberately
mix and match chips from different batches, and/or different
manufacturers, at the board level.
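That distinction is easy to see numerically. Here's a minimal sketch (all numbers hypothetical: 1000 sensors per die, 0.05 g of independent noise per sensor, and an assumed 0.02 g die-wide skew) showing that averaging knocks down independent noise by roughly 1/sqrt(N), but a shared batch bias passes straight through the average:

```python
import random
import statistics

random.seed(42)

TRUE_ACCEL = 1.0   # hypothetical true acceleration, in g
N_SENSORS = 1000   # sensors averaged on one die
N_TRIALS = 200     # repeated averaged measurements

def averaged_reading(shared_bias):
    """One averaged reading from N_SENSORS sensors sharing a common bias."""
    readings = [TRUE_ACCEL + shared_bias + random.gauss(0, 0.05)
                for _ in range(N_SENSORS)]
    return statistics.mean(readings)

# Case 1: errors are purely independent -- averaging works as advertised.
errs_indep = [averaged_reading(0.0) - TRUE_ACCEL for _ in range(N_TRIALS)]

# Case 2: the whole die shares a fixed 0.02 g skew -- averaging can't touch it.
errs_skewed = [averaged_reading(0.02) - TRUE_ACCEL for _ in range(N_TRIALS)]

print("mean error, independent noise:", statistics.mean(errs_indep))
print("mean error, shared die skew:  ", statistics.mean(errs_skewed))
```

The first mean error lands near zero; the second sits near the full 0.02 g skew no matter how many sensors share the die, which is exactly why mixing batches at the board level can help where on-die averaging can't.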
Less than a decade ago, a major multinational corporation (name
redacted to protect friends who still work there) had most of its
global DNS architecture (which was all centralized in one big server
pool near the home office) destroyed over a weekend, because a
long-standing backup/refresh procedure killed more than half the
servers due to a bad batch of hard drives -- all from the same
manufacturing batch.
Long story very short: corporate IT policy was to hot-swap one
drive in each server's two-drive RAID array with a new hard drive every
X months, keeping all the drives below a certain maximum age, and using
the RAID auto-rebuild to do this without shutting down any servers. The
manpower economics of this (the DNS team was less than half a dozen
people) made it "smart" to do this to all the servers at once, over a
weekend. They'd been doing this process for so long that no one thought
about it anymore -- until they got through more than 3/4 of the servers
and noticed they were getting failure alerts from the servers they'd
already done. The new hard drives all turned out to be bad...
but in a way that destroyed the "good" half of the RAID array, *and*
destroyed the RAID controller card to boot. Cue mad rush (on a
*Sunday*!) to dig up enough hardware (gotta love just-in-time inventory
chains) to get barely half the servers up and limping along before open
of business on Monday....
Same answer: it's an expensive way to improve accuracy. The sensors
are (I believe) rather sizable already, so a big array of them would
be a large, expensive chip with a not-too-impressive accuracy gain.
The key question is, would this be lots better than having the
customer buy a big batch of chips and build his own array? If not,
there's no point.
Almost certainly, the resources and effort are better spent on
improving the sensors and doing better compensation for non-ideal
behavior.
And even if you had a situation where multi-sensor fusion was
necessary, you'd probably need some kind of certification process to
ensure you hadn't accidentally managed to combine *every* sensor in
stock that just happened to share a particular bias....