David Staten wrote:
In Al's defense, my interpretation is that the CAS is a critical single point of failure. While none of us has experienced a failure there, a failure at that point is a show-stopper. He was taking a theoretical failure risk and doing the math with it. Going from a 1:1000 chance of failure to a 1:1,000,000 chance is a significant result.
And I think what most others have been saying is that we don't know that it isn't already a 1:1,000,000 chance. Adding the complexity of two sensors and the necessary interoperability drops the probability of each one down to 1:1000. So now you've added weight, fabrication, and maintenance headaches to get back to square one. The CAS hasn't demonstrated any problems, so why go chasing through the bushes after rabbits? Most of the people taking exception with Al are saying that the method can't be applied because it requires a historical dataset, and there's no history to base it on. Without the history, it's all meaningless voodoo. The CAS issue has caught a lot of attention because Al used it as an example of how the EC2 is dangerously unreliable, and most of us (being new to FMEA) now view the technique with a jaundiced eye.
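The arithmetic both sides are leaning on can be sketched in a few lines of Python. The 1:1000 and 1:1,000,000 figures are the thread's illustrative numbers, and the independence assumption is exactly the part a real installation (shared wiring, power, vibration) can violate:

```python
def combined_failure(p_each, n=2):
    """Probability that all n redundant units fail at once,
    assuming the failures are statistically independent."""
    return p_each ** n

# Al's argument: one sensor at 1:1000 -> a redundant pair at 1:1,000,000.
p_pair = combined_failure(1e-3)
print(p_pair)  # on the order of 1e-06

# The counter-argument: if a lone CAS is already near 1:1,000,000, and the
# extra interconnect complexity degrades each unit of a pair to 1:1000,
# the pair lands right back where the single sensor started -- square one.
```

Note the whole calculation hinges on the unknown single-sensor rate; plug in a different starting probability and the "significant result" can evaporate.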
To take the theoretical into the practical: the Shuttle guys THOUGHT they had a 1:200 per-flight risk of catastrophic failure. Two destroyed orbiters later, that risk is actually playing out closer to 1:50. It's about risk management, and the CAS is just one (of MANY) single points of failure that can be identified. That said, there are many other risks out there that take precedence.
Dave
And there could still be a true 1:200 per-flight risk of catastrophic failure; the flights flown so far may simply have been unlucky. That's the problem with statistics: a handful of failures is a very small sample to estimate from.
,|"|"|, |
----===<{{(oQo)}}>===---- Dyke Delta |
o| d |o www.ernest.isa-geek.org |