In a recent one, the past couple of days has us seeing Toyota trying to disprove one professor by using another. Nothing new about that; science is supposed to work via a type of dialectic. But gamesmanship, and such, as the main focus? (A third professor's view)
That people have experienced, and suffered the consequences of, some type of failure of auto control systems is the one fact not to be forgotten.
And, one would hope that the illustrious car company, whose reputation has always been associated with value and quality, will take the high road and do the right thing. Which is? (Note: 03/12/10 -- ah, they are, here is their web site related to recalls)
Well, Congress' panels ought to take the thing to its full extent, or at least as far as their monied selves will let them (the defining trait of the politico set, of late, is a salivation response to the stimulus of money under the nose).
The Administration needs to live up to its responsibility. Which is? Act as mediator for disparate views (see below) and be an advocate for us, the wee people.
Now, for some of the other roles and views. Engineers need (and ought to be allowed) to work out the technical issues as they may. However, that we're talking hardware (operational and computational - with software for the latter) means that it's a new game. Perhaps, this dilemma might be an opportunity to learn some necessary lessons.
Lawyers? Well, several things here. But, one of these ought not be suppressing the truth by manipulating fear. That counsel goes for both sides. Whose first principles apply?
People, and drivers? Wake the heck up to the growing presence of little hidden 'bots' (general use of a term which is meant to imply that the genie has been let out of the bottle) that can play havoc with us if we don't pay attention. But what are we to do?
Evidently, for a while, there has been a growing computational framework for the driving platform due to our technical progress of the last few decades. Surprise! Of course, this has been obvious, but we've let ourselves trust too much.
Too, some of us seem to have gone to a total zombie state (you know who you are), in this sense: mind warped by, and wrapped around, an abstracted, thereby virtual, cloud via thumb and eye, in the driving mode, to the extent that the vehicle has become a mindless WMD with no one in control.
Such behavior, on the part of the reckless, has allowed the potentials for devastation, for the driver and for those unlucky souls in the presence of the driver, to increase inordinately.
Of course, we might use that last condition as evidence that the 'gaming' generation's upbringing (computation in their blood, almost) has had some insidious repercussions that seriously need our attention.
That comment is not being negative; it is more this: the issues related to truth engineering need to be at the forefront (more below) of both an analytic and corrective (in the sense of control theory) stance.
Does failsafe mean anything to the people? Well, we could think of several general connotations, but whether any of these might pertain to some type of increased certainty needs to be lifted to view for some serious scrutiny.
Why? For a computational system to be risk minimal, there are several things that need to happen. For one, all domains need to be decidable. Okay, sub-domains will be decidable. Any arbitrary collection of these will not, by default, be decidable. It (that is, the order and stability that we all desire) has to be imposed.
By the way, for now, just think of decidable as subsuming stability, convergence, and other well-behaved numerical properties (technical geeks, be patient, we'll get there - quasi-empiricism).
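To make that composition point concrete, here is a toy sketch in Python. The throttle/brake constraints are made up for illustration (they stand in for no real vehicle spec): each subsystem's constraint set is satisfiable on its own, yet over part of the operating region the naive union is empty. The order has to be imposed by design; it does not emerge on its own.

```python
# Toy sketch: two individually satisfiable constraint sets whose
# combination fails over part of the operating region. The constraints
# are hypothetical, chosen only to illustrate the composition problem.

from itertools import product

# Hypothetical operating ranges, coarsely discretized (percent).
throttles = range(0, 101, 10)
brakes = range(0, 101, 10)

def powertrain_ok(throttle, brake):
    # Subsystem A: assume the powertrain spec demands throttle above 50%.
    return throttle > 50

def braking_ok(throttle, brake):
    # Subsystem B: assume hard braking demands near-idle throttle.
    return brake < 80 or throttle <= 10

# Each sub-domain has satisfying states on its own ...
print(any(powertrain_ok(t, b) for t, b in product(throttles, brakes)))  # True
print(any(braking_ok(t, b) for t, b in product(throttles, brakes)))     # True

# ... but demand both at once under hard braking, and the joint domain
# is empty: no state satisfies the arbitrary combination.
joint = [(t, b) for t, b in product(throttles, [80, 90, 100])
         if powertrain_ok(t, b) and braking_ok(t, b)]
print(joint)  # [] -- well-behaved pieces, misbehaved whole
```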
Too, even if we attain the desired, decidable, state, it may have been due to some type of cleverness and accomplished by a man-in-the-loop. Yet, that 'man' is not everyman but is someone who needs to be cognizant of the decisional issues, to be awake (aware - see the zombie reference above, that state of mindless flow (apologies to Buddha) which is insensitive to subtle changes in the related sensors), and to be experienced (one impetus for the growing use of simulation/visualization in the modern world).
What is decidable? Well, it is not something to assume. Some designers will argue that they've accomplished the necessary assumption set for closure. Ah! In an open framework, such as we see with driving, there is no proof of this. Why do you think that new products require so much testing? Even those with mathematical, and modeling, support need some type of empirical workout (an airplane, for example).
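In that spirit, here is a minimal sketch of what an empirical workout buys you beyond the math: hammer a toy arbitration routine with random inputs and check a safety invariant on every output. Everything here is hypothetical (the routine and the invariant are illustrations, not any vendor's code).

```python
# Random-input workout of a toy throttle arbitration routine against a
# safety invariant. Hypothetical throughout; this is a sketch of testing
# practice, not a real controller.

import random

def throttle_command(pedal, cruise_target, brake_pressed):
    """Toy arbitration: braking wins; otherwise the larger demand wins."""
    if brake_pressed:
        return 0.0
    return max(pedal, cruise_target)

def invariant_holds(pedal, cruise_target, brake_pressed, out):
    # Safety property: braking must force the commanded throttle to zero,
    # and the command must always stay within its physical bounds.
    if brake_pressed:
        return out == 0.0
    return 0.0 <= out <= 1.0

random.seed(42)
for _ in range(100_000):
    pedal = random.uniform(0.0, 1.0)
    cruise = random.uniform(0.0, 1.0)
    brake = random.random() < 0.3
    out = throttle_command(pedal, cruise, brake)
    assert invariant_holds(pedal, cruise, brake, out), (pedal, cruise, brake)
print("100,000 random cases passed the invariant")
```

A pass over 100,000 random cases is evidence, not proof - which is the whole point about the open framework.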
You see, there is really no basic mathematical foundation for these types of decisions that relieves us of the need for caution. Why do you think that managers and lawyers get involved? Too, the required, onerous, aspects are costly both in time and resources (money, people), so we get the bean counters involved.
Engineers know these things, but they are human and can err.
Who to believe? Well, this is solvable, folks, but requires some effort. We'll use the situation to explore some of the issues and propose possible outcomes.
Leaping ahead: how can we get the proper stable computational bases for these operational entities without some serious cooperation across the industry to the extent, even, of having common platforms?
What? Yes, any creative (and competitive-advantage) aspect probably ought to be limited to a small region of performance that could easily be tested.
So, let's bring the concepts of failsafe to the fore. This will take a few posts, but here is a short list, for starters.
- Failsafe - acknowledges the possibility of failure and attempts some type of prevention (and lessening) of consequences
- Failure mode - recognition of these is necessary to know how to manage the failures even if they are to occur with minuscule probability
- Who to believe? - yes, in the small world of a system, there will be sensors, decisioners (bow to the great GWB), connectors, and more, which at any time can be in conflict (computational sense - constraint satisfaction, NP, meaning hard) about what they know and what to do; too, there will need to be some type of rectification (we might say, auto programmers, like quants, ignore complexity) even if the situation is undecidable (that's one reason for overrides, folks - yet, a mere brake override is not sufficient; see the sketch after this list)
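As promised, a sketch of the 'who to believe' item. It assumes triple-redundant pedal sensors and simple median voting; real arbitration adds plausibility checks, rate limits, and diagnostics, so take this strictly as illustration.

```python
# Toy sensor-voting sketch: median-vote three redundant pedal readings
# and, when they cannot be rectified, fail safe loudly rather than
# guess silently. Hypothetical values and thresholds throughout.

import statistics

def vote(readings, tolerance=0.05):
    """Return (pedal_value, status) from three redundant readings."""
    m = statistics.median(readings)
    dissenters = [r for r in readings if abs(r - m) > tolerance]
    if len(dissenters) > 1:
        # Conflict beyond rectification: safe default plus a loud fault.
        return 0.0, "FAULT: limp-home mode, zero throttle demand"
    return m, "OK"

print(vote([0.42, 0.43, 0.41]))  # agreement -> (0.42, 'OK')
print(vote([0.42, 0.90, 0.10]))  # conflict  -> fail-safe output
```

Note the design choice: when rectification fails, the output is a safe default plus a loud fault, not a silent guess. That, in miniature, is the failsafe stance.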
We all learn conflict resolution in kindergarten or earlier. As we know, conflicts are real; some may even say essential (think ca-pital-sino). And, we know that power from the top down (hammering down the opposition, for the politicos) can suppress the truth, thereby causing easy (or so it seems) resolution (which usually is a partial solution).
That this situation exists can allow a more complete discussion of some important issues (subsequent posts).
Remarks:
01/22/2013 -- USA Today story on settlements. From three years ago, lest we forget.
02/26/2011 -- Another go.
02/08/2011 -- There was a report today concerning a study on the SUA problem that has been going on quietly. More news will be coming later when the report is technically analyzed.
10/07/2010 -- Several principles need to be explored, such as the ergodic one.
09/28/2010 -- It is nice to see the IEEE weigh in. Notice: sensors galore, driver in the loop, ...
04/19/2010 -- Genies, no not genius, indeed!
03/15/2010 -- Response to Toyota by Safety Research & Strategies, Inc. Did Toyota really use 'infallible' in describing their systems? One professor seems to think so.
03/12/2010 -- Of course, some media may be better than others. Popular Mechanics has a good article about the Toyota issue. Notice the comments: a wide range, some seemingly coherent and well expressed. However, that the overall tone of the article suggests user error as the chief problem is something that ought to be analyzed itself. For instance, if the accelerator and the brake pedals are being confused by the foot and mind of the driver, is that not indicative that someone didn't think to provide sufficient means to differentiate the two? Ah, ought there not to have been some studies to 'optimize' such selection? One problem with the modern world, folks, is that we apply set discriminators (in general, separation algorithms) without regard to some of the nuances that apply. Oh yes. One criticism of Lean (and Toyota's system) is that it cuts to the quick in a very efficient manner; at the same time, these actions leave a state in which it is very hard, many times, to recover traces sufficient for analysis. That is one thing the NHTSA ought to look at in terms of what might be necessary to perform ex-post-facto diagnosis.
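On that trace-recovery point, here is a minimal sketch of the kind of retention that makes ex-post-facto diagnosis possible: a fixed-size ring buffer of recent samples, frozen when a fault is detected. Hypothetical throughout; no claim that any actual event data recorder works this way.

```python
# Toy 'black box' sketch: keep the last N samples in a ring buffer and
# stop overwriting once a fault is flagged, so a trace survives for
# later analysis. Illustration only.

from collections import deque

class EventRecorder:
    def __init__(self, capacity=500):
        self.buffer = deque(maxlen=capacity)  # oldest entries roll off
        self.frozen = False

    def log(self, t, channel, value):
        if not self.frozen:
            self.buffer.append((t, channel, value))

    def freeze(self):
        # On a detected fault, preserve the trace for diagnosis.
        self.frozen = True
        return list(self.buffer)

rec = EventRecorder(capacity=4)
for t, v in enumerate([0.10, 0.20, 0.90, 0.95, 1.00]):
    rec.log(t, "pedal", v)
print(rec.freeze())  # the last 4 samples survive for analysis
```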
03/12/2010 -- Forgot to mention one player, namely the media, especially TV. Guys/gals, don't monkey with reports in order to emphasize some viewpoint, such as ABC is accused of doing. Oh, it's just editing, they claim. Wait, aren't media businesses just like the car makers driven by market share and profit? ... The question was asked: doesn't the public know that Toyota, with all its resources, hasn't looked at the underlying issues? Well, we also know that Toyota publicly admitted moving negatively on a quality line in order to put resources toward expansion of the market share. How to attain balance? Good question in that even the best-and-brightest fail regularly.
Modified: 01/22/2013